Tag: Transhumanism

  • Optic Technology, an NGO for the ethical development of technology

    The Dominican priest Eric Salobir has founded an NGO called Optic Technology, “a platform for interdisciplinary reflection and dialogue on the challenges of technology”, which seeks to “anticipate” the impact of technology on society, not by lobbying but by “putting all the facts on the table”. The NGO issues “ethical recommendations for each major issue raised by innovations such as artificial intelligence, big data, blockchain, etc.”, with the aim of achieving “good regulations”.

     

    In this context, an Optic Talk symposium was held on 14 May in Paris on the theme of “rebuilding trust in technology”. Optic Technology believes that technology is “a solution for humanity” and not “a factor for complexity and distrust”. However, there is no point “reassuring people on the cheap by saying that everything is fine”. We need rather to “anticipate” and “create the conditions for a sometimes difficult dialogue between researchers in the hard sciences and the humanities”. Technology “is bringing about a new society and is simply a product” that must continue to serve human beings. “Technology is neither good, bad, nor neutral. It is ambivalent. It generates as many positive as negative externalities on society. Optic Technology is therefore developing ethics-by-design methods to support companies in managing their innovation”.

    Le Figaro, Enguérand Renault (9/05/2019)

  • UNESCO promotes reflection on a “humanistic approach” to artificial intelligence

    Last Monday, UNESCO’s director-general Audrey Azoulay opened the first world conference to promote a “humanistic approach” to artificial intelligence (AI).

     

    OECD secretary-general Angel Gurría stressed that all stakeholders, whether “academic, state or economic, are calling for general principles to guide artificial intelligence”. He also underlined UNESCO’s essential role in coordinating the discussions. In April, UNESCO is due to examine the conclusions of the COMEST report on robotics ethics[1]. Ultimately, the goal is to draw up common global standards on AI ethics and implement “responsible” AI tools.

     

    Two years ago, major AI companies, including IBM, Facebook, Google, Amazon, Apple and Microsoft, launched a similar initiative called the Partnership on AI to “introduce ethical reflection on the use of algorithms”. The Council of Europe, the European Commission and many states, including France, have also launched this type of reflection. Writing on the subject, Cédric Villani, an LREM French MP and author of a report on artificial intelligence published a year ago, stated that “the goal should not be to hastily establish standards but rather to open up the debate, compare points of view and find a basis for common values”.

     


    [1] UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology.

    Le Figaro, Enguérand Renault (5/03/18) – L’Unesco veut définir une éthique de l’intelligence artificielle

     

  • Facial recognition system to diagnose genetic diseases: good or bad news?

    In a study published in the journal Nature Medicine on 7 January 2019, researchers presented the results of using facial recognition to detect genetic disorders.

     

    This new tool uses the “apparent characteristics of an individual”, i.e. the phenotype, in this case the face, to detect hundreds of disorders and genetic variations. In fact, “many genetic disorders are accompanied by physical characteristics, such as a shorter nose and a larger forehead for Williams syndrome, for example, or almond-shaped eyes for Down syndrome”. A deep learning algorithm “analyses facial characteristics” and “compiles a list of possible syndromes”.
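    As an illustration of the kind of pipeline described above, here is a minimal sketch in Python: a face image is reduced to a feature vector, which is compared against reference vectors for known syndromes to produce a ranked list of candidates. The embedding step, the syndrome prototypes and all numbers are placeholder assumptions, not the model used in the study.

    ```python
    # Hypothetical sketch: rank candidate syndromes from a face image.
    import numpy as np

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Stand-in for a trained deep-learning feature extractor (assumption)."""
        # Flatten and project onto a fixed random basis just to obtain a vector;
        # a real system would use a trained convolutional network here.
        rng = np.random.default_rng(0)
        projection = rng.normal(size=(image.size, 64))
        vec = image.flatten() @ projection
        return vec / np.linalg.norm(vec)

    def rank_syndromes(face_vec: np.ndarray, references: dict) -> list:
        """Return syndromes sorted by cosine similarity to the face embedding."""
        scores = {
            name: float(face_vec @ ref / np.linalg.norm(ref))
            for name, ref in references.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Illustrative references: random vectors standing in for learned prototypes.
    rng = np.random.default_rng(1)
    reference_embeddings = {
        "Williams syndrome": rng.normal(size=64),
        "Down syndrome": rng.normal(size=64),
        "Noonan syndrome": rng.normal(size=64),
    }

    image = rng.random((32, 32))          # placeholder for a face photograph
    for name, score in rank_syndromes(embed_face(image), reference_embeddings):
        print(f"{name}: similarity {score:.2f}")
    ```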

     

    The study was carried out using 17,000 patient images representing more than 200 syndromes. For Dr Karen Gripp, co-author of the study: “With this study, we’ve shown that adding an automated facial analysis framework, such as DeepGestalt, to the clinical workflow can help achieve earlier diagnosis and treatment, and promise an improved quality of life” for the patients concerned.

     

    However, whilst the application can be used to “formulate an initial diagnosis”, “targeted examinations, or additional analyses” should confirm or refute this first assessment.

     

    Despite the advances offered by this type of device, the researchers themselves are concerned about “the risks associated with this system”, which is used to identify a condition on the basis of a simple description. At a time when we are surrounded by images, hasty conclusions could be drawn; employers might want to exclude candidates deemed to pose a risk… based on a single photograph.

    RTS, Katja Schaer (10/01/2019) ; Medical Press, Christen Baglaneas (09/10/2019) ; Top santé, Mathilde Ragot (10/01/2019)

  • Jacques Testart vs Laurent Alexandre: Transhumanism in the spotlight

    For Jacques Testart, transhumanism is “an infantile ideology that takes advantage of the extraordinary progress of technoscience over the last decade, to bring to light old, archaic myths about the super-intelligent, immortal, invincible human”. Facing Laurent Alexandre, he put forth various positions, knowing that “even though most transhumanist promises are in vain, they will have an impact on society”.

     

    Between repairing and augmenting the human body, the line is set at eugenics, “another word for transhumanism”, as Jacques Testart explains. This is a reality that Laurent Alexandre does not deny, saying: “IVF can be a tool for medicine or augmentation. It is important to distinguish the so-called ‘negative’ eugenics—in which the embryo’s DNA is examined before a decision is made whether or not to implant it— from ‘positive’ eugenics—in which one chooses among several embryos, or even, in the more or less near future, one alters the embryo’s DNA”. But embryo selection, adds the biologist, “allows one to practice ‘negative’ (eliminating, sterilising) eugenics and ‘positive’ (selecting the best, etc.) eugenics simultaneously. And the number and location of embryos outside the body make it incomparably more eugenic than abortion, for example”.

     

    When Laurent Alexandre says that it is important to avoid being left behind by those investing heavily in biotechnology, and to engage with new technologies on a massive scale, Jacques Testart asserts that “falling behind in the race to the edge can put us ahead in the game of survival…” And he calls for a “major social shift”, saying, “People must be given something to dream about other than the contributions of technology and the false promises of transhumanism”.

     

    In these cutting-edge fields, “regulation of ethical issues must take place at the global level. It should therefore be up to the UN to organise citizen conventions on all major controversial issues, to define limits with the force of law”, the researcher believes. This is a major challenge for Dr Laurent Alexandre, who notes the “dichotomy between technological revolutions that commit humanity over the very long term and our very short-term political systems. Our governments think in terms of 15 days, the Chinese government in terms of 50 years. As for companies like the Big Four and BATX (Baidu, Alibaba, Tencent and Xiaomi), it is 1,000 years”.

     

    Jacques Testart is pessimistic about the future: “For the first time, humanity is facing its limits. This is a fundamental emergency that shouldn’t be treated lightly. The problem is that transhumanist ideology is intrinsically linked to the scientific belief—as naive as it is stubborn—in continuous progress”. He is concerned: “Transhumanists completely ignore the environment. They put Homo sapiens on the worktable and tinker with them. They stick electrodes on them, change their genes, etc., but humankind is nothing without nature, the environment, or the community!” 

     

    “Science creates just as many problems as it solves. Still, I don’t think it can be stopped”, says Laurent Alexandre. But for Jacques Testart, faced with unknown prospects, greater awareness is necessary: “No past major shift compares with what is being experienced now. Kids today are addicted to tablets and smartphones. It’s a total change in behaviour. Young people are completely cut off from nature. They are in silico. There will be no memories, no mind. The situation is unprecedented and extremely serious. That is a proven fact. We are very much experiencing a collapse of civilization”.

    La Vie, Olivia Elkaim et Jean-Claude Nodé (14/06/2018)

  • China: smart cameras monitor concentration of high school pupils

    At Hangzhou Number 11 High School in eastern China, artificial intelligence monitors the attention span of pupils and provides feedback to the teacher. The school management has had three cameras installed at the front of the class, and these “mystery eyes”, in the words of one pupil, “scrutinise the pupils’ facial expressions” and “record the slightest sign of distraction”. Seven moods are recorded: “neutral, happy, sad, disappointed, angry, afraid and surprised”, and pupils who concentrate get “A” grades whilst those who let “their minds wander” get “B” grades, according to the headmaster.
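    A hypothetical sketch of the grading logic this description suggests: an expression classifier (not shown) labels each video frame with one of the seven moods, and the share of frames judged attentive is mapped to the “A”/“B” grades mentioned above. Which expressions count as “distracted” and the 70% threshold are purely illustrative assumptions, not the school’s actual system.

    ```python
    # Toy sketch: map per-frame expression labels to an attention grade.
    from collections import Counter

    EXPRESSIONS = {"neutral", "happy", "sad", "disappointed", "angry", "afraid", "surprised"}
    DISTRACTED = {"sad", "disappointed", "angry", "afraid"}   # assumption for illustration

    def attention_grade(frame_labels: list[str], threshold: float = 0.7) -> str:
        """Grade a pupil 'A' if the share of attentive frames exceeds the threshold."""
        counts = Counter(frame_labels)
        attentive = sum(n for label, n in counts.items() if label not in DISTRACTED)
        return "A" if attentive / max(len(frame_labels), 1) >= threshold else "B"

    print(attention_grade(["neutral"] * 40 + ["happy"] * 10 + ["sad"] * 5))   # -> "A"
    print(attention_grade(["disappointed"] * 30 + ["neutral"] * 10))          # -> "B"
    ```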

     

    Rejecting any claim that this infringes upon the privacy of its pupils, the school hopes that the system “will boost pupil performance”. Video surveillance and artificial intelligence are increasingly becoming “part and parcel of everyday life in China”. The following ‘statement’ from a pupil appears on Hangzhou.com, a website managed by the central government: “Previously, when I had classes that I didn’t like very much, I would be lazy and maybe take naps on the desk, or flick through other textbooks. But I don’t dare be distracted since the cameras were installed in the classrooms”.

     

    For further reading:

    Brain sensors used in China to detect employees’ emotions

    Facial recognition in China: a formidable monitoring tool

    20 minutes, Naomi Mackako (17/05/2018)

  • Artificial intelligence in medicine: regulations at the heart of the debate

    Although the States General on Bioethics consultation, which has just drawn to a close, brought to light numerous demands concerning medically assisted reproduction and euthanasia, issues relating to the introduction of artificial intelligence (AI) in the medical field have also attracted attention. They raise “questions in terms of safety, respect for private life and protection of human dignity”, which are recognised by the CCNE (French National Consultative Ethics Committee), because “robots – at first simple operators – are gradually freeing themselves from the link that connects them to carers”.

     

    So where do humans fit in? Care is gradually becoming dehumanised: “For the first time, the US Food and Drug Administration has authorised artificial intelligence to make a diagnosis without medical supervision”. And let’s not forget automated prescriptions: who assumes responsibility in the event of medical error? And what about the patient’s right to be treated by a human being? Doctors must “respect the autonomy and dignity of [their] patient” (Hippocratic Oath), but what happens to empathy and human warmth if the machine has coldly calculated that treatment is futile or not sufficiently viable? The day will come when “only a machine will be authorised to administer treatment due to insurance and responsibility issues”.

     

    Advances in AI are driven by “the virtually limitless resources” of the American GAFAM (Google, Apple, Facebook, Amazon, Microsoft) and Chinese BATX (Baidu, Alibaba, Tencent, Xiaomi) giants, in conjunction with international pharmaceutical laboratories, “all of which have ethics committees and lobbyists ready to certify that every step is taken with the best of intentions based on sound rationale”. The main issue over the next few years will therefore be regulation, despite the objections of advocates of freedom at all costs, who fear that the country will lag behind if medical progress is regulated.

     

    Even the European Parliament recognises the need for such regulation. In its resolution of 17 February 2017 on “Civil Law Rules on Robotics”[1], it called upon the Commission, Member States and the international community to adopt powerful and adapted legal tools to deal with the current revolution, because simple laws or directives cannot regulate a global change in our societies. Policy-makers should address this issue as a matter of urgency. Will the States General on Bioethics debate provide this wake-up call to “set or, if need be, impose, the ethical rules that distinguish the acceptable from the unacceptable so that we remain human”?

     

    For further reading:

    Legal status of robots: experts warn the European Commission

    Artificial intelligence and health: “Who has the final say in terms of diagnosis?”

    Five key regulations for creating an ethical AI framework

    Scientists warn of ethical issues relating to the use of artificial intelligence in medicine

    Artificial intelligence – new industrial revolution or the apocalypse?

    Artificial intelligence in medicine – Medical Association publishes white paper

     

    [1] “Civil Law Rules on Robotics”: http://www.europarl.europa.eu/news/en/press-room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules

     

    Causeur, Cyrille Dalmont (23/04/2018)

  • For Jacques Testart, transhumanism is a new form of eugenics

    “Modernity has popularized the right to a child and is now engendering the right to a child of a certain quality.” Jacques Testart, the author of a new book entitled Au péril de l’humain, condemns transhumanism – which “aims to improve the human species through technology, for perfect health, physical and intellectual performance, and immortality” – as “the new name for eugenics”.

     

    Among its advocates, some extremists “openly advocate the fusing of man with machine, [while] others who are more ‘moderate’, like Laurent Alexandre or Luc Ferry in France, believe in a kind of hyper-humanism based on technology”. For all of them, it is “the meeting of archaic infantilism with unprecedented technological power”. Transhumanism exalts perfect health and immortality, “promises that can never be kept” while “life expectancy in the West is slowing”. It seeks “to offer a new salvation through technological means to people who have proclaimed the death of God and who no longer have any political aspirations. It would also replace a struggling capitalism that had promised social progress and unlimited growth”.

     

    To spread their theories more effectively, transhumanists claim that there is no alternative: “You either follow along or you get crushed”, but “following the slope is easy. Alignment with the worst is a bleak prospect.”

     

    When asked about the Estates General on bioethics, the researcher asserted that they are missing the public’s input, which was their raison d’être: “The Estates General on bioethics have become a simple update of the law based on scientific progress, and above all the demands of certain lobbyists. Every time, it’s a matter of adding new permissiveness rather than setting limits. We are told that there’s no alternative, but there’s no turning back either. Each law passed is an irreversible ‘step forward’. Soon it will be assisted reproduction and, in seven years’ time, with the next Estates General, surrogacy may even be legalized.”

     

    Jacques Testart, the biologist behind the birth of Amandine, the first French test-tube baby, explains: “I thought that it was a matter of test tubes, to fix a couple’s fertility issues, but I realized that something much deeper had happened anthropologically speaking. In 1986, in L’Œuf transparent, I tried to explain how playing sorcerer’s apprentice was revolutionary: from then on you could see the egg, the start of conception, even before birth. This pre-birth paved the way for pre-implantation genetic diagnosis (PGD, developed four years later), and thus for consensual eugenics […] In 1994, PGD was approved for those carrying serious genetic diseases. Since 2000, it has been expanded to those ‘at-risk for disease’, i.e. potentially everyone.” He laments: “Some people always want to take things further. The Inserm Ethics Committee and the Académie Nationale de Médecine would like to use PGD on all embryos. More than the expansion of medically assisted reproduction to ‘all women’ or the acceptance of surrogacy, I believe that the most crucial bioethical issue is this: the selection of embryos and of future persons. This is the real revolution that will explode as soon as embryos are manufactured by the dozen, and without medical servitude.”

    Le Parisien (08/04/2018) ; Le Figaro, Eugénie Bastié (06/04/2018)

  • “It’s not my fault – my brain implant made me do it”

    Treated for specific disorders – severe obsessive-compulsive disorder, epilepsy, etc. – patients given deep brain stimulation have noticed changes in how they perceive things. The question of responsibility comes to the fore as neurotechnologies develop. In fact, how responsible is a person whose actions are influenced by their brain implant?

     

    The brain is generally considered the control centre where rational thoughts and emotions are created. As such, it orchestrates actions and behaviour. It is the key to autonomy and the legal and moral responsibility of human acts.

     

    Historically, moral and legal responsibility has largely focused on the independent individual, i.e. a person with the ability to deliberate or act on the basis of their own desires and projects with no disruptive external factors. With the advance of modern technology, however, numerous intermediaries are involved in how these brain implants function, including artificial intelligence programmes, which have a direct effect.

     

    Furthermore, is the individual person responsible if an accident is deliberately caused under the influence of the implant? Or is the implant manufacturer at fault? Or the scientist? Or the health care professionals who implanted the device? If several stakeholders are involved in implanting a device, how can that responsibility be attributed? Insulin pumps and implantable cardiac defibrillators have already been pirated “in real life”. Who is responsible for these “pirate versions” that could have a highly detrimental effect on implants? At what point do these technologies take precedence over individual responsibility?

     

    This raises many questions, which should be addressed without delay so as to avoid being caught out by new situations: “It’s not my fault – my brain implant made me do it”.

    The Conversation, Laura Y. Cabrera et Jennifer Carter-Johnson (03/04/2018)

  • Blind woman regains sight thanks to first bionic eye in Belgium

    Glasses fitted with cameras, an antenna-connected hand-held computer and a retinal implant that sends signals to the optic nerve make up Argus II – the bionic eye developed by Second Sight Medical Products[1]. The system allows patients to “see” artificially without restoring normal vision.

     

    Although 250 patients have already benefited from this treatment in the United States, Germany and Italy, this was a first for Belgium. Dr Fanny Nerinckx, ophthalmologist and retinal surgeon, and her team at Ghent University Hospital treated a patient with retinitis pigmentosa (RP), a hereditary eye disease. “This condition is characterised by the degeneration of photo-sensitive cells within the eye. Vision gradually decreases, culminating in blindness. Treatment or intervention at an advanced stage of the disease has been impossible in Belgium up until now,” explains Dr Nerinckx.

     

    The mini-cameras on the glasses send images to a hand-held computer, which translates the images and sends them via the antenna to the implant located on the retina. “The implant takes over from the dead photosensitive cells and stimulates the intact nerve cells in the retina via 60 electrodes. The nerve cells transmit the signal via the optic nerve to the brain, where the image eventually forms”.
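    A rough sketch of the image-to-electrode translation step described in that quote: a camera frame is averaged down to one brightness value per electrode, which would then set a stimulation amplitude. The 6 x 10 layout (60 electrodes) and the linear brightness-to-current mapping are simplifying assumptions, not Second Sight’s actual algorithm.

    ```python
    # Toy sketch: reduce a camera frame to a 60-electrode stimulation pattern.
    import numpy as np

    def frame_to_stimulation(frame: np.ndarray, rows: int = 6, cols: int = 10,
                             max_current_ua: float = 100.0) -> np.ndarray:
        """Average the frame over a rows x cols grid and scale to a current level."""
        h, w = frame.shape
        grid = frame[: h - h % rows, : w - w % cols]          # crop to a divisible size
        grid = grid.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
        grid = (grid - grid.min()) / (grid.max() - grid.min() + 1e-9)   # normalise 0..1
        return grid * max_current_ua                          # per-electrode amplitude

    frame = np.random.rand(120, 160)        # placeholder for a grayscale camera frame
    pattern = frame_to_stimulation(frame)
    print(pattern.shape)                    # (6, 10): one value per electrode
    ```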

     

    This technique is not possible for individuals who are blind from birth, because patients must learn to see again, i.e. “to interpret camera images and connect them to images that they have already seen”. This exercise is similar to learning a foreign language, but with images, and it requires prolonged rehabilitation.

     

    The operation was performed on 18 January. The patient began wearing the glasses on 1 February, for two hours a day to begin with. Following intensive training, she could soon see flashes of light, then distinguish lines, and finally geometric shapes in a contrasting colour. The next stage will be letter recognition, followed by external elements (cars, pedestrian crossings, etc.) and finally the recognition of people and their expressions.

     

    For further reading:

    United Kingdom: NHS to test “bionic eyes” in ten blind patients

     

    [1] A start-up specialising in medical devices to counteract blindness.

    RTBF (27/03/2018) ; 7sur7.be (27/03/2018)

  • Five key regulations for creating an ethical AI framework

    In the course of his work, MP Cédric Villani, who is due to submit his report on artificial intelligence (AI) tomorrow, came across the Ethik-IA (AI Ethics) initiative, which “seeks to create a positive framework for the use of AI and robotics in health care”. According to the initiative, the construction of this ethical framework must be based on “five key regulations”:

     

    • Patient informed consent,
    • The human guarantee of artificial intelligence,
    • The scale of the regulation depending on the sensitivity of the health data,
    • Promotion of career development,
    • Independent, external supervision.

     

    During his meeting with the MP, David Gruson, member of Ethik-IA and of the health chair at Sciences Po Paris, explained that “health will be a priority sector in his report. But the fundamental aspects of the report will be followed up in the États généraux de la bioéthique (Estates General on Bioethics) framework”.

     

    For his part, Jean-François Delfraissy, President of the Comité consultatif national d’éthique (CCNE, French National Consultative Ethics Committee), has raised the issue “of consenting to a robot” since the launch of the Estates General on Bioethics.

     

    Emmanuel Macron is due to present his vision “of AI in France” on 29 March, the submission date for the Villani report.

    Hospimédia, Jérôme Robillard (27/03/2018)

  • British man claims to be the world’s first “cyborg” – half man, half robot

    Thanks to an antenna implanted in his skull since 2004, Neil Harbisson, who was born with achromatopsia, a condition that condemned him to seeing everything in black and white, can now perceive colours. A sensor transforms light waves into vibrations which his brain has learned to analyse. This technique is comparable to synaesthesia, a natural albeit rare phenomenon that allows humans to perceive things using several senses at once, i.e. to hear colours and see sounds.
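    As a toy illustration of the “light waves into vibrations” idea, the sketch below converts a wavelength of visible light to its frequency and transposes it down by a fixed number of octaves into the audible range. The 40-octave transposition is an assumption chosen for illustration; the exact mapping used by Harbisson’s antenna is not described in the article.

    ```python
    # Toy sketch: map a light wavelength to an audible "colour tone".
    SPEED_OF_LIGHT = 3.0e8  # metres per second

    def colour_to_tone(wavelength_nm: float, octaves_down: int = 40) -> float:
        """Return an audible frequency (Hz) derived from a light wavelength (nm)."""
        light_hz = SPEED_OF_LIGHT / (wavelength_nm * 1e-9)
        return light_hz / (2 ** octaves_down)

    for name, nm in [("red", 650), ("green", 530), ("blue", 470)]:
        print(f"{name} ({nm} nm) -> {colour_to_tone(nm):.0f} Hz")
    ```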

     

    Repaired human or augmented human? (see From “repair” to “augmentation” of human beings – where does transhumanism begin?) Neil Harbisson himself explains: “I see it as a part of my body, not a device, but an organ. I don’t wear an antenna, I have an antenna. It’s part of me”. Thanks to his implant, he says, he can perceive more than tangible reality, namely the infrared and ultraviolet spectra, and even distant landscapes in space, via his antenna’s Internet connection.

     

    The British national therefore believes he is the world’s first “cyborg”, and that the number of cyborgs is set to increase. His antenna appears on his passport photo, which, as far as he is concerned, proves that the British Government recognises it as part of him. “More and more humans see themselves as cybernetics or cyborgs (…). In the future, we will see people with organic parts, cybernetics and (…). Governments must gradually accept these changes because some of their citizens will be part-human and part-technology”, he explained at the World Government Summit, an annual forum in Dubai where discussions focus on innovation, technology and futurism.

     

    Neil Harbisson is the co-founder of the Cyborg Foundation and the Transpecies Society, which represent people who do not consider themselves to be human. Current membership figures are unknown.

    Daily Mail (14/02/2018)

  • Transhumanism: “The real danger for the future of mankind is the commercialisation of life”

    In an interview with Le Figaro, Jean-Marie Le Méné[1], President of the Fondation Lejeune, and Laurent Alexandre[2], surgeon and neurobiologist, compare their visions of the future of mankind. Whereas Laurent Alexandre believes that robots will replace humans, Jean-Marie Le Méné is of the opinion that “no software can ever imitate the complexity of the human soul”. “The real danger for the future of mankind is the commercialisation of life.”

     

    For Laurent Alexandre, “the triumph of AI (artificial intelligence)” is “unavoidable”. This involves “boosting a new morality”, an audacious wager because “AI speeds up time, questions benchmarks and pushes towards eugenics and neuro-technology. Elon Musk believes that AI can compete with human intelligence and will usurp humans in a decade or so. In his eyes, that warrants placing micro-processors in the brains of our children. AI is drawing us into an uncontrollable war of intelligences”. Jean-Marie Le Méné, for his part, “does not believe in the ‘unavoidable’ nature of future developments”: “Ideologically, these are ‘paper tigers’. Pacts between scientism and the market against nature are indicative of an ideology that will flounder”. He added: “I haven’t got a miracle solution. We can’t turn the clock back on decades of philosophical void and intellectual deconstructivism. We have gone from theocentrism to anthropocentrism, biocentrism and now, today, to technocentrism. Humans count for nothing any more and transhumanism prospers in this vacuum. (…) But to say that humans have to be repaired in order to be ‘saved’ is tantamount to considering that there is a technical solution to the folly of mankind, which is wrong”.

     

    Although AI challenges the working world, Laurent Alexandre believes that technology replacing “the human brain in the short term [will] mostly impact upon less gifted individuals and lead to a dramatic rise in social tensions”. As far as Jean-Marie Le Méné is concerned, “as with any rapid scientific and technological revolution, there will be some major upheavals. It’s not the first time and it’s not shocking. (…) The real problem is the commercialisation of life. Life has become a lucrative source to be exploited. The outcome? Mankind is falling apart. Add to that transhumanism: it is falling apart but is worth its weight in gold. Transhumanism is a form of modern slavery that sells humans no longer as a unit but bit by bit. (…) This form of commercialisation produces victims. Unable to create augmented humans, we are already suppressing diminished humans”. Indeed, “transhumanism will be eugenic through the selection and modification of embryos. Transhumanists want to create super babies capable of resisting AI. They also want to cure Down syndrome. Yes, transhumanists are eugenicists,” comments Laurent Alexandre, as confirmed by the words of Jean-Marie Le Méné: “Negative eugenics is already in place in medically assisted reproduction, which gets rid of imperfect embryos and wants to go much further, because we are already creating embryos with DNA from three people”.

     

    According to Laurent Alexandre, these issues desperately need “international regulation” to avoid “transhumanist nomadism”. But Jean-Marie Le Méné wonders about the legislator’s ability to resist new technologies, at the risk of “accepting everything in the name of technical progress”, all too quickly equated with “moral progress”.

     

    [1] Author of Les premières victimes du transhumanisme (The first victims of transhumanism), published by Pierre-Guillaume de Roux.

    [2] Author of La guerre des intelligences (The war of the Intelligences), published by JC Lattès.

    Le Figaro Magazine, Alexandre Devecchio et Aziliz Le Corre (01/12/2017)

  • A brain implant to improve memory

    Scientists at the University of Southern California (USA) have reportedly improved the memory function of volunteers by up to 30% thanks to a brain implant. The scientists presented this “world first” during the annual meeting of the Society for Neuroscience, held in Washington D.C. from 11 to 15 November 2017.

     

    The three-year study focused on 20 volunteers “suffering from epilepsy” who were already “fitted with electrodes targeting the hippocampus”, the region of the brain that is “vital for learning and memory”.

     

    In the first part of the study, the scientists collected data on brain activity during memorisation exercises, with no external electrical stimulation. By modelling the recordings, they were able to pinpoint the specific areas that had been activated. During the second part of the study, the scientists delivered “small electric shocks to the hippocampus” via implants located in this area. These signals were “the same as the natural signals we receive when we try to remember something”. The implant thus “reinforces the natural memorisation process”.
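    A highly simplified sketch of the two-phase approach described above: phase one fits a model mapping recorded “input” hippocampal activity to the “output” activity seen during successful memorisation; phase two uses that model to propose a stimulation pattern mimicking the natural signal. The actual study modelled hippocampal signals in a far more sophisticated way; the synthetic data and linear least-squares fit below are only an illustration.

    ```python
    # Toy sketch: learn a mapping from recorded activity, then reuse it
    # to generate a "natural-looking" stimulation target.
    import numpy as np

    rng = np.random.default_rng(42)

    # Phase 1: recordings made during memorisation exercises (synthetic stand-ins).
    inputs = rng.normal(size=(200, 16))             # 200 trials x 16 recording channels
    true_map = rng.normal(size=(16, 8))
    outputs = inputs @ true_map + 0.1 * rng.normal(size=(200, 8))   # 8 "output" channels

    # Fit the input -> output mapping by least squares (the "model of the recordings").
    weights, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

    # Phase 2: given new activity, predict the natural output pattern and use it
    # as the target for the small electrical pulses delivered by the implant.
    new_activity = rng.normal(size=(1, 16))
    stimulation_target = new_activity @ weights
    print(stimulation_target.round(2))
    ```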

     

    Professor Dong Song, who led the research, highlighted a 15% improvement in short-term memory and a 25% improvement in working memory: “We are writing the neuronal code to improve memory function. This has never been done before”, he declared.

     

    While the device could “help to overcome Alzheimer’s disease”, it is also another step on the road to human enhancement.

     

    Usbek & Rica, Maylis Haegel (21/11/2017)

  • Robotics: trivialised updates give cause for concern

    At the end of October, a humanoid robot was granted Saudi nationality (see Humanoid robot awarded Saudi nationality). This week, an android with the features of a small boy became an official resident of Tokyo. For Laetitia Pouliquen, founder of Woman Attitude and NBICethics, these announcements raise the following questions: “What does granting residence or citizenship along with civic rights to these ‘intelligent’ electronic machines actually mean, and what are the repercussions for Humanity? (…) Are the Saudi and Tokyo decisions epiphenomena, paying lip-service to robot manufacturers and the countries concerned, or do they actually pose a real technological threat to Humanity?” This news must be seen in “a broader context”, and Laetitia Pouliquen links it to the resolution adopted by the European Parliament in February 2017, which stated that “the creation of a legal identity for robots with artificial intelligence, allowing them to make independent decisions, harbours the risk of major anthropological rupture” (see: European Parliament: are robots equal to humans?). Giving robots a legal identity will lead to confusion between humans and the humanoid appearance of the robot and, furthermore, will exempt manufacturers from their responsibilities. The question is both “pressing and relevant”, and this news is “far more serious than it appears”. “Let’s defend our human dignity, threatened by the non-ethical use of robotics using artificial intelligence!” concludes Laetitia Pouliquen.

     

    Further reading: Robots at a time of transhumanism

     

    Le Figaro, Laetitia Pouliquen (6/11/2017)

  • A humanoid robot obtains Saudi citizenship

    On 25 October, Saudi Arabia granted Saudi citizenship to a humanoid robot, Sophia, created by the Hanson Robotics Company. This is a world first, unveiled during the Future Investment Initiative conference in Riyadh: “I am truly honoured and proud of this unique distinction. I want to live and work with humans, (…) I will do my best to make the world a better place”, announced the robot, which “can reproduce dozens of extremely realistic facial expressions”. The android “has cameras instead of eyes with an algorithm equipped to recognise human faces and establish visual contact”. It is “capable of detecting human language, answering questions and memorising interactions and faces that it sees”.

     

    In Europe, the Delvaux report, adopted in February 2017, encourages “the introduction of legal status for autonomous robots” as a solution to the unpredictability of autonomous robots and the resulting non-liability of manufacturers. Robots would be considered “active participants” and no longer tools. For Nathalie Nevejans, private law lecturer and legal and ethical expert in robotics at Douai University, “this provision is inadequate”. “Legal status has already been given to ‘things’ without a conscience or feelings, such as companies”, but “human beings are then in the driving seat”. However, “when we refer to the actions of an autonomous robot, which could become ‘unpredictable’, there is no real human intervention”. The confusion between humans and the humanoid appearance of autonomous robots does not justify this kind of measure. Nathalie Nevejans, who was interviewed on this subject at the European Parliament at the end of September, believes that the European “Machinery” Directive is sufficient to resolve the problems associated with autonomous robots: “If a robot becomes unpredictable, this can be attributed to a total breakdown in its initial programming, and this would therefore be a programming and design fault”, which, in turn, can be attributed to the manufacturer, who “ought not to make a dangerous machine commercially available”. The ultimate danger of the Delvaux report is that it leads us to consider humans as machines. Wouldn’t granting legal status to robots be tantamount to “preparing society for the fantasy whereby the inanimate replaces the animate”?

    Institut européen de bioéthique (25/10/2017); Sputnik (26/10/2017); Mediapart (27/10/2017)

  • An “ethical black box” urgently required for robots

    Robots are gradually entering public spaces and are increasingly interacting with humans. Faced with the ethical questions this raises, academics are advocating a “black box” for robots, like the ones on aeroplanes, in an attempt to monitor and explain the decisions taken by robots in the case of accidents, for instance.

     

    This is one of the suggestions put forward during a recent conference at the University of Surrey, in the United Kingdom, to address concerns about robots operating independently without human control. “We would hope that accidents are rare, but they are nevertheless inevitable,” stated Alan Winfield, Professor of Robot Ethics at the University of the West of England, in Bristol.

     

    Once installed in a robot, the ethical black box would record the robot’s decisions and the basis for those decisions, its movements and the information picked up through its sensors such as cameras, microphones and viewers. Alan Winfield believes that this step could facilitate “serious accident” investigation.
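    A minimal sketch of what one record in such an “ethical black box” might contain, based on the elements listed above (decision, rationale, movement, sensor snapshot). The field names and the JSON-lines storage format are illustrative assumptions, not a published specification.

    ```python
    # Toy sketch: an append-only log of robot decisions and their context.
    import json
    import time
    from dataclasses import dataclass, asdict, field

    @dataclass
    class BlackBoxEntry:
        decision: str                                  # what the robot chose to do
        rationale: str                                 # the basis for that decision
        movement: str                                  # the resulting actuation
        sensors: dict = field(default_factory=dict)    # camera/microphone readings, etc.
        timestamp: float = field(default_factory=time.time)

    def append_entry(path: str, entry: BlackBoxEntry) -> None:
        """Append one record to the log file (one JSON object per line)."""
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(entry)) + "\n")

    append_entry("blackbox.log", BlackBoxEntry(
        decision="stop",
        rationale="obstacle detected within 0.5 m",
        movement="wheels halted",
        sensors={"lidar_min_distance_m": 0.42},
    ))
    ```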

     

    The robot could then use the same device to describe its actions in simple language and explain to users why it decided on a certain course of action.

    The Guardian, Ian Sample (19/07/2017)

  • Responding to the challenges of artificial intelligence requires ethical reflection

    Artificial intelligence specialist Jean-Gabriel Ganascia warns that ethical reflection is essential given the massive influx and use of data by artificial intelligence systems.

     

    “In the light of devices that track and preserve information, it is no longer possible to forget,” he explained. “On a personal level, the information collected will follow a person for his or her entire life without that person having any hold over it, yet he or she will not be the same ten or twenty years down the line. Collectively, forgiveness and peace are linked to a certain element of forgetting when it comes to information”. And he wonders how to reconcile these opposites.

     

    The second issue involves delegation to machines: “under the pretext of efficiency, there is a huge risk that a machine-based decision will take precedence over a human decision because it avoids taking responsibility”. Decisions made by artificial intelligence are based on autonomous learning, which can make them unpredictable. They are taken furtively and warrant the introduction of a certain number of human values. It is also vital “to make human beings responsible for their acts and not to delegate those tasks to machines”.

     

    This calls for immediate reflection. In fact, “in some areas of the United States, predictive justice is already based on predicting recidivism to determine the sentence, which is worrying”. Furthermore, the baseline indicators are unreliable and may lead to “a form of implicit discrimination in terms of data collection”. In the insurance field, groups will establish individual risk on “perfectly discriminatory bases”.

     

    These applications call for the creation of a framework, a key question at a time when “conventional standards are floundering”. “Therefore, something needs to be changed”. This is a demanding challenge in a context where technology is gaining ground and accelerated knowledge leads to “a kind of intoxication”.

    Le Monde, Laure Belot (04/07/2017)

  • “Killing off death”: a controversial trial in Latin America

    “Killing off death”. This transhumanist objective is at the heart of a controversial trial to resuscitate brain-dead patients, the Reanima project. The biotechnology company Bioquark, which already created a stir back in May 2016 (see “ReAnima” trial: scientists want to bring brain-dead patients back to life), is considering starting a new trial in Latin America over the next few months. The protocol, identical to the first trial announced and later dropped in India[1], involves injecting a patient’s stem cells into his or her spinal cord to stimulate neurone growth and encourage neurone inter-connection, “and thus bring the brain back to life”. Additional “therapies” will also be used, namely the injection of a combination of proteins into the spinal cord, electric nerve stimulation and laser treatment of the brain. Bioquark wants to enrol twenty patients in this trial.

     

    Many questions have yet to be answered: “Who decides whether a patient is actually brain dead? How can a deceased person take part in the trial? What happens if the patients are ‘resuscitated’ and prove to be severely disabled? Aren’t scientists playing with the family’s hopes? Would ethical approval be granted, even in Latin America?”.

     

    Many scientists and bioethicists accuse Bioquark of “charlatanism” and of “abusing the hopes of grieving families”. Bioquark CEO Ira Pastor acknowledges that the concept is “audacious” but thinks it is feasible, citing cases of young brain-dead patients who have been resuscitated: “Such cases emphasise that things are not always black and white in our understanding of serious problems affecting consciousness,” he explained. Furthermore, preclinical testing in animals has reportedly already been carried out, and various phases of the protocol have already been tested in living humans with brain injuries, with positive results.

     

    [1] The trial announced in Rudrapur, India, in April 2016 never included patients. It was closed in November 2016. 

    BioEdge, Michael Cook (2/06/2017); STAT, Kate Sheridan (1/06/2017)

  • Delvaux report adopted: the European Parliament enters into transhumanist fiction

    Yesterday, the European Parliament adopted the Delvaux report by 313 votes in favour, 285 against and 20 abstentions. MEPs therefore voted “for the creation of a ‘legal identity’ for robots and human augmentation or enhancement”. Only the reference to a universal basic income “as the solution proposed to offset a robotics-related loss of employment” was rejected, “poor consolation” according to Europe for Family.

     

    Supporters and critics of this bill both believe that a European legal framework for robotics is required. However, the framework defined in the Delvaux report “takes Parliament into the fictitious transhumanist world where machines are equal or even superior to man”, according to Europe for Family. This is a case of “turning civilisation upside down with human enhancement, the power of robotic industries and GAFA[1] promoting the development of nano, bio, information and cognitive (NBIC) technologies”. Assigning a legal identity to autonomous robots “is synonymous with granting rights and giving obligations to autonomous robots capable of making decisions”. Thus everyone – the robot manufacturer, user, designer or owner – is exonerated of his or her responsibilities. It heralds “the onset of confusion between man and machine, animate and inanimate, human and non-human”.

     

    Rejection of the universal basic income has, however, been welcomed because “work recognises the human dignity of every individual and nothing can take that away”.

     

    [1] Google, Apple, Facebook, Amazon.

    Europe for Family (16/02/2017)

  • A company implants a microchip under the skin of its employees

    In Belgium, a sub-cutaneous microchip the size of a grain of rice has been implanted between the thumb and index finger of eight volunteers working for the NewFusion company. The microchip allows them “to open the entrance door to the company or activate their computer like a badge”. It contains personal data and, as Vincent Nys, spokesperson for NewFusion, explained, “if you place a smartphone in front of it, you can immediately send your contact details to someone”.
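    A toy sketch of the badge-style access check described above: the chip presents a unique identifier, which is looked up in an access list before the door unlocks. The identifiers and the lookup scheme are illustrative assumptions, not NewFusion’s actual system.

    ```python
    # Toy sketch: door access controlled by a chip's unique identifier.
    AUTHORISED_CHIPS = {
        "04:A2:19:B3": "employee 1",
        "04:7F:C0:22": "employee 2",
    }

    def check_access(chip_uid: str) -> bool:
        """Return True if the presented chip identifier is on the access list."""
        holder = AUTHORISED_CHIPS.get(chip_uid)
        if holder:
            print(f"door unlocked for {holder}")
            return True
        print("access denied")
        return False

    check_access("04:A2:19:B3")   # door unlocked
    check_access("00:00:00:00")   # access denied
    ```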

     

    The initiative has raised some concerns, especially with regard to privacy. “It’s a real risk,” Alexis Deswaef, President of the Human Rights League, told RTBF (the public broadcasting organisation of the French-speaking community of Belgium). “Employees are now being monitored from under their skin. It is a total control tool. You know exactly when an employee starts work and when he or she takes a cigarette break. Will you then analyse whether he or she is sufficiently productive? What will this collection of data be used for? Will we have to relinquish some of our right to a private life for greater security and comfort in the future?”

    Europe 1 (04/02/2017) ; Le Parisien (03/02/2017)