Peter 2.0: The Human Cyborg (2020)
Channel 4 Television Corporation.
Authors: Dr David R. Painter, Associate Professor Alan Wee-Chung Liew and Dr David Tuffley
“There is no technology I would not consider… to see just how far we can turn science fiction into science reality. My whole science education when I was young was Dr Who and Star Trek. One of the most important things I learned was that any problem could be solved if you were bright enough, brave enough, and had access to really, really cool technology.”
Peter 2.0: The Human Cyborg – A Synopsis
Dr David R. Painter
Research Fellow, The Hopkins Centre
Peter: The Human Cyborg documents Dr Peter Scott-Morgan’s battle against motor neurone disease (MND), a progressive and terminal neurological condition that causes loss of muscle control, paralysis, and locked-in syndrome (full awareness, but an inability to move or speak). Most people with MND die from respiratory failure within 3–5 years. Peter’s plan is to save his life by embracing machines:
There is no technology I would not consider… to see just how far we can turn science fiction into science reality. My whole science education when I was young was Dr Who and Star Trek. One of the most important things I learned was that any problem could be solved if you were bright enough, brave enough, and had access to really, really cool technology.
Peter’s vision includes replumbing his stomach, breathing through a tube, cloning his voice, transmitting his thoughts via brain activity, and controlling an exoskeleton.
This is not as far-fetched as it first sounds, given the current state of technology. However, it quickly becomes apparent that Peter is not only struggling against death but also racing against time. Exoskeletons and brain-activity control are the first plans to be shelved, reflecting not only the current state of those technologies but also Peter’s priorities.
The need to communicate should perhaps be listed alongside the other fundamentals: oxygen, water, and food. The locked-in syndrome that awaits at the end of MND might be the most isolating symptom of all. Peter’s focus quickly becomes interfacing with a communication system based on eye tracking, selecting letters in turn to spell words, combined with predictive text to speed typing.
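The mechanics of such a system can be sketched in a few lines. The tiny vocabulary, frequency scores, and function below are illustrative assumptions, not the actual software used in the documentary:

```python
# Minimal sketch of letter-by-letter spelling with predictive text.
# The vocabulary and frequency counts are invented for illustration;
# real systems use large language models rather than a lookup table.

VOCAB = {"hello": 120, "help": 95, "human": 80, "cyborg": 60}  # word -> frequency

def predict(prefix, vocab=VOCAB, n=3):
    """Return the n most frequent vocabulary words starting with prefix."""
    matches = [w for w in vocab if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -vocab[w])[:n]

# After the user dwells on "h" and then "e" with their gaze,
# the interface can offer whole-word completions:
print(predict("he"))  # -> ['hello', 'help']
```

Offering ranked completions after each gaze-selected letter is what turns a slow letter board into a usable typing speed.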
Peter spends weeks in a recording studio with computer scientists at The University of Edinburgh, rehearsing sentences to clone his voice. The first results are disappointing, with Peter’s partner describing the artificial voice euphemistically as “a little bit robotic”. Fortunately, further retraining of the algorithm produces better results, with the voice now capable of performing a convincing rendition of the classic song “Pure Imagination”. The artificial voice expresses emotion and sounds much more like Peter.
Eye tracking communication, whether combined with a realistic or artificial voice, is simply too slow to allow conversation at normal speeds. With computer scientists at Intel, Peter discusses the possibility of a radically different approach: an AI that would speak on Peter’s behalf using his own vocabulary.
“I am increasingly turning into a full cyborg and adding more and more AI, which will simply make me even more of a cyborg. I will never stop being human, but maybe I will help to change what it means to be human.”
“Showing courage despite the odds, Peter makes us pause to ask what it means to be human. The capacity to embrace change appears to be a quintessential virtue we have ignored for far too long.”
Although there will be similarities in individuals’ responses to mortal challenges, there will also be idiosyncrasies (including peculiar habits unique to the individual). Being embodied in an avatar might be important for some; exoskeletons and brain-computer interfaces might be important for others.
In work conducted with collaborators at The University of Queensland, we explored the feasibility of brain-computer interfaces for free communication.¹ We made several improvements to existing brain-computer interface systems that were sufficient to allow text-based messaging between two healthy adults, based on their brain activity alone.
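One common non-invasive approach, frequency tagging (SSVEP), gives a flavour of how such decoding can work. The sketch below uses synthetic signals and invented parameters; it is a generic illustration, not the method of the cited study:

```python
import numpy as np

# Illustrative sketch of frequency-tagged (SSVEP-style) decoding: each
# on-screen option flickers at its own frequency, and the decoder picks
# the option whose frequency dominates the recorded signal's spectrum.
# Sampling rate, targets, and "EEG" below are all synthetic assumptions.

FS = 250                                     # sampling rate (Hz), assumed
TARGETS = {"A": 8.0, "B": 10.0, "C": 12.0}   # option -> flicker frequency (Hz)

def decode(signal, fs=FS, targets=TARGETS):
    """Return the target whose flicker frequency carries the most power."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    def power_at(f):
        return power[np.argmin(np.abs(freqs - f))]
    return max(targets, key=lambda k: power_at(targets[k]))

# Synthetic 2-second recording dominated by a 10 Hz oscillation plus noise:
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.standard_normal(t.size)
print(decode(eeg))  # -> 'B'
```

Spelling a word then reduces to a sequence of such decoding decisions, one per attended flicker.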
“Such systems, although in their infancy, lay the groundwork for future clinical applications, including the development of assistive devices for people with MND”.
The development of new devices requires the marriage of basic and applied sciences and codesign with clinicians and end users. The Hopkins Centre is currently undertaking work in this area, including the development of new virtual reality (VR) games for brain injury assessment and rehabilitation.
When designing AI, ethical considerations must be at the forefront. As Dr Rachel Thomas, a pioneer of AI ethics at The Queensland University of Technology, notes, problems with AI include feedback loops that limit individual autonomy, amplification of societal biases, difficulty in identifying system errors, and abdication of personal responsibility.
Solutions include methods to identify errors quickly, timely and meaningful appeals, consultation with marginalised voices, diversity in hiring and promotions, designing products with contestability in mind, and a closer integration of product engineering with trust and safety.
Figure 1: Peter had his biological voice removed.
Image Credit: Channel Four Television Corporation
& Facebook/Dr Peter Scott-Morgan
Commentary: Peter 2.0: The Human Cyborg
The documentary Peter: The Human Cyborg, which follows Dr Peter Scott-Morgan’s (@DrScottMorgan) battle against motor neurone disease (MND), demonstrates what technology is capable of now, and points to a future that would have sounded like science fiction even a decade or two ago.
Peter noted that the ability to communicate is as important to him as the other life fundamentals of oxygen, water and food. Before losing his ability to speak due to a laryngectomy, Peter cloned his voice and his face so that he could communicate through an avatar.
As the documentary shows, after a few iterations of technical improvement, Peter was eventually able to communicate with others through a fairly realistic avatar and a synthesised voice that resembled his own.
Artificial Intelligence (AI) and Deepfake Technology
One of the technologies behind this amazing feat is AI. AI has progressed to the stage where many seemingly impossible tasks are now becoming reality. Take for example deepfakes: AI-powered deep learning techniques that can synthesise realistic-looking fake videos, images, audio and other media. Deepfake technology can now generate imagery that can fool humans or automated face-recognition systems, the so-called deepfake attack. This has prompted AI researchers (including my group²) to come up with effective countermeasures that could defend against a deepfake attack.
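One family of cues such countermeasures can exploit is that generative upsampling may leave characteristic artefacts in an image’s frequency spectrum. The hand-crafted feature below is only a toy sketch of that idea; real detectors are learned models (e.g. deep networks), and everything here is a synthetic assumption:

```python
import numpy as np

# Toy illustration of a spectral cue sometimes associated with generated
# imagery: compare how much of an image's energy sits at high spatial
# frequencies. This is NOT an actual deepfake detector, just a sketch
# of one kind of feature a countermeasure might examine.

def high_freq_ratio(image, cutoff=0.25):
    """Fraction of spectral energy beyond `cutoff` of the Nyquist radius."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return spectrum[r > cutoff].sum() / spectrum.sum()

rng = np.random.default_rng(1)
smooth = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # low-frequency content
noisy = rng.standard_normal((64, 64))                       # broadband content
print(high_freq_ratio(smooth) < high_freq_ratio(noisy))  # -> True
```

In practice such features are only one input among many; learned detectors combine spatial, spectral and temporal evidence.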
We could expect that very soon Peter will be able to use deepfake technology to generate an avatar and synthesised speech that not only doesn’t sound “a little bit robotic” but would be indistinguishable from a live capture of the real Peter! The only major technological hurdle would be doing so in real time and with easily available computing resources.
The desire to communicate while losing the ability to move and speak prompted Peter to use an AI communication system based on eye tracking, which selects letters from a computer screen to spell words; the AI then generates a complete predictive sentence (using vocabulary learned from Peter) for speech synthesis. Peter was able to use this system to converse with others in several TV interviews.
“We could expect that very soon Peter will be able to use deepfake technology to generate an avatar and synthesised speech that not only doesn’t sound ‘a little bit robotic’ but would be indistinguishable from a live capture of the real Peter!”
Nevertheless, recent advances in AI and deep learning on several fronts, such as neural natural language processing (language modelling, machine translation, question answering) and neural style transfer (voice conversion), will very soon dwarf what we see in the documentary, released in 2020. One could expect that very soon Peter will be able to interact with others as though he were in front of a webcam, just as we do in online meetings during COVID lockdowns!
AI researchers are constantly pushing the boundaries of machine learning. For example, my group is working on continual learning,³ or lifelong learning, where AI algorithms can learn and improve themselves over time to handle new tasks, much as humans do. My group is also working on machine reasoning, combining techniques in deep learning with knowledge representation (through knowledge graphs).
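Plain experience replay, sketched below, conveys the core idea behind rehearsal-based continual learning: keep a small memory of past examples and mix them into each new task’s batches so earlier tasks are not forgotten. This is a generic illustration, not the group’s specific “online triplet rehearsal” method:

```python
import random

# Generic sketch of rehearsal for continual learning. A bounded memory
# retains a uniform sample of the example stream (reservoir sampling),
# and training batches for new tasks are mixed with rehearsed examples.

class RehearsalBuffer:
    def __init__(self, capacity=100, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Reservoir sampling: keep a uniform sample of everything seen."""
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def mixed_batch(self, new_batch, k=4):
        """Combine the current task's batch with k rehearsed old examples."""
        k = min(k, len(self.memory))
        return list(new_batch) + self.rng.sample(self.memory, k)

buf = RehearsalBuffer(capacity=5)
for x in range(20):                        # stream of "task 1" examples
    buf.add(x)
batch = buf.mixed_batch(["a", "b"], k=2)   # "task 2" batch plus rehearsal
print(len(batch))  # -> 4
```

Because old examples keep appearing in training batches, the model is repeatedly reminded of earlier tasks, which mitigates catastrophic forgetting.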
Artificial General Intelligence (AGI)
AI technology is advancing at such an exponential rate that many researchers are starting to talk about the possibility of Artificial General Intelligence (AGI).
AGI is the ability of an intelligent agent to understand or learn any intellectual task that a human being can. Although it might still be decades until we are able to create machines that can experience sentience, self-awareness and consciousness, things are nevertheless progressing in that direction.⁴
One day, with AGI, machine intelligence and human intelligence will be indistinguishable!
“So integrated into our lives has technology become, that it is sometimes difficult to know where the user ends and technology begins. Technology has become a functional extension of our human selves.”
Commentary: AI Ethics
Dr David Tuffley, Senior Lecturer in Applied Ethics and Sociotechnical Studies, School of ICT, Griffith University
Artificial Intelligence (“AI”) in the twenty-first century is a powerfully disruptive technology, one whose influence in society is growing exponentially. It is a technology with the potential to bring enormous benefit, but also great harm, if not properly managed.
How then may we reap the benefits of AI while ensuring we are not harmed by it? How do we frame the correct relationship with AI to ensure the primacy of human dignity as technology in general accelerates exponentially into the future?
I assert that AI is neither good nor bad in and of itself. It is simply a tool, an extension of human intelligence — not an externalised threat to be feared as presented in popular culture. Clearly, it is the strategic uses to which AI is put that determine its value. The potential abuses of AI — for example, in rogue autonomous weapons — are a manageable risk and should not place unreasonable restraint on its development when the potential benefits arguably far outweigh the harm.
So integrated into our lives has technology become, that it is sometimes difficult to know where the user ends and technology begins. Technology has become a functional extension of our human selves, and it is not uncommon for people to look at their smartphone a hundred times a day.⁵ With this degree of dependence, a person is already a functional cyborg: a blend of humanity and technology.
One does not need to have the technology implanted internally to be a cyborg: it is enough that a person extends their mind into it.⁶
AI is a recognised technology in its own right, but perhaps more importantly, it is an enabler of other technologies, many of which may not even have been invented yet. AI is already integral to many functions in smartphones.
Amazon’s Alexa, amongst others, is an early-generation AI-enabled digital assistant designed to help people organise their everyday lives while communicating with their computers in natural language.
AI in the workplace can greatly improve job performance. In Japan, the first recorded case of AI saving someone’s life occurred when a woman with a rare form of leukaemia was initially misdiagnosed by a team of human doctors.⁷
A diagnostic AI was put to work, and in under 20 minutes, had analysed the woman’s genome, compared it with over 20 million oncological studies, arrived at the correct diagnosis and recommended a treatment regime, which was subsequently proved correct. It was the combination of human doctors and a diagnostic AI that succeeded, where human doctors alone had failed.
“AI is a recognised technology in its own right, but perhaps more importantly, it is an enabler of other technologies, many of which may not even have been invented yet.”
“Whilst there are obvious ethics considerations and risks, a prosperous future with improved quality of life depends on us coming to terms with the challenges of AI. What is particularly important however, is for us to pay attention to the dynamic tension that is generated – as we make the transition from a human to a “post human” society.”
More recently, a macaque monkey called Pager successfully played a game of Pong with its mind. While it may sound like science fiction, the demonstration by Elon Musk’s neurotechnology company Neuralink is an example of brain-machine interface in action. A coin-sized disc called a “Link” was implanted by a precision surgical robot into Pager’s brain, connecting thousands of micro threads from the chip to neurons responsible for controlling motion.
Similar in concept to Dr Peter Scott-Morgan’s approach, this is another example of how AI and brain-machine interfaces can be used to help people who are paralysed or have spinal or brain injuries, as well as those with prosthetic limbs. As an extension of human intelligence, such therapeutic innovations would give those with disability the liberating experience of doing things by themselves again, and thus bring tremendous benefit to humanity.
In terms of a set of universally applicable rules for ethical technology use – I propose the following principles that draw upon the earlier work of philosopher Immanuel Kant, whose ideas continue to exert a strong influence on ethics today. These simply stated principles are general enough to work in both the virtual and physical world. Their simplicity gives them universality, while making their intention transparent. Thus, I propose that Kant’s categorical imperatives be adapted for ethical technology development and use. Categorical imperatives are unconditional requirements that are always true:
- Before I do something with this technology, I ask myself: would it be acceptable if everyone did it?
- Will this technology harm, diminish or dehumanise anyone, including people I don’t know and will never meet?
- Do I have the informed consent of those who will be affected?⁸
If the answer to any of these questions is ‘no’, then it is arguably unethical to use technology in that manner. If developed and used according to ethical principles, technology in general has the potential to help individuals approach a fuller expression of their human potential. The critical issue is to institutionalise principles for ethical technology use in the AI development community to promote life-affirming uses.
Whilst there are obvious ethics considerations and risks, a prosperous future with improved quality of life depends on us coming to terms with the challenges of AI. What is particularly important however, is for us to pay attention to the dynamic tension that is generated – as we make the transition from a human to a “post human” society.
An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard (Final Report), March 2020. Published by Standards Australia and commissioned by The Department of Industry, Science, Energy and Resources.
See A/Prof. Alan Wee-Chung Liew’s contributions (pp. 26 & 30) on responsible AI and data privacy, inclusion and fairness in AI models, selection bias in AI models, and the need for a set of AI ethical guidelines for the responsible use of AI.
Tuffley, D (2019). Human Intelligence+Artificial Intelligence=Human Potential. Griffith Journal of Law and Human Dignity, Section VI: A New Relationship. Source: https://research-repository.griffith.edu.au/handle/10072/385881
Tuffley, D (2021). Neuralink’s monkey can play Pong with its mind. Imagine what humans could do with the same technology. Source: https://theconversation.com/neuralinks-monkey-can-play-pong-with-its-mind-imagine-what-humans-could-do-with-the-same-technology-158787
Produced by The Hopkins Centre
Research for Rehabilitation and Resilience
A joint initiative of Griffith University, Menzies Health Institute Queensland, Metro South Health and the Queensland Government.
1. Renton, A.I., Mattingley, J.B. & Painter, D.R. (2019). Optimising non-invasive brain-computer interface systems for free communication between naïve human participants. Sci Rep 9, 18705. https://doi.org/10.1038/s41598-019-55166-y
2. Pinto, D., Garnier, M., Barbas, J., Chang, S. H., Charlifue, S., Field-Fote, E., . . . Heinemann, A. W. (2020). Budget impact analysis of robotic exoskeleton use for locomotor training following spinal cord injury in four SCI Model Systems. J Neuroeng Rehabil, 17(1), 4. https://doi.org/10.1186/s12984-019-0639-0
3. Pham, C., Liew, A.W.C. & Wang, S.L. (2021). “Continual learning without forgetting via online triplet rehearsal”, manuscript under review.
4. Baum, S. (2017). “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”, Global Catastrophic Risk Institute Working Paper 17-1, SSRN 3070741 (12 November 2017).
5. Willard, S. (2019). “Study: People check their cell phones every six minutes, 150 times a day”, Elite Daily (online, 11 April 2019). https://www.elitedaily.com/news/world/study-people-check-cell-phones-minutes-150-times-day
6. Clark, A. & Chalmers, D. (1998). “The Extended Mind”, Analysis, 58(1), 7–19. https://www.jstor.org/stable/3328150
7. Tuffley, D. (2016). “How can Doctors use Technology to Help them Diagnose?”, The Conversation (online, 25 October 2016). https://theconversation.com/how-can-doctors-use-technology-to-help-them-diagnose-64555
8. Kant, I. (2014). Fundamental Principles of the Metaphysic of Morals [tr. Thomas Kingsmill Abbott]. eBooks@Adelaide.
The Hopkins Centre
Menzies Health Institute QLD
0478 709 990