Chris Barry returns with his second installment on our robot overlords, this time exploring robot intelligence and robot skin.
Guest blog by Chris Barry
Chris Barry returns with his second installment on robots, artificial intelligence and human enhancement. Chris considers himself a bit of a polymath, but probably something closer to a polliwog. He has interests across science, skepticism, current affairs, politics, social media, technology and the arts. He is in the middle of helping organise the 2013 National Skeptics Convention, being held in Canberra 22-24 November 2013. He has written for many and varied audiences over his career, and we are glad he has obliged us with a series of blogs exploring the potential and awesome future of enabling technologies. His first series (part 1 here) explores robotics and artificial intelligence and their implications for human enhancement. We are on a steady drive toward the singularity. Do we really want to get there, if indeed we even can?
In part 1, we looked at the development of military and humanoid robots. In this installment, we are going to examine robot intelligence and robot skin.
I think, therefore I am. Am I?
Way back in 2006, the Australian government funded a five-year cross-disciplinary project that brought together neuroscience, artificial intelligence, proteomics, robotics and computer science under an initiative called Thinking Systems. The project's goal was to support the generation and application of new knowledge in the development of intelligent machines, robots and information systems, placing Australia at the forefront of this field internationally.
The results of this initiative were presented at the Powerhouse Museum in December 2011 under the title “Thinking Systems Frontiers: Intelligent Machines, Robots, Human-Computer Interaction and the Science-Arts Nexus”. Three key areas were presented:
- Autonomous control using a brain-like system
- Cognition and communication
- Navigation through real and conceptual spaces
(Unfortunately, the report presented at the Powerhouse Museum appears to be no longer available, so I can't give you a reference, but here is something else from the University of Queensland that is interesting.)
The research also explored concepts such as emergent behaviour, sensory systems, emotion detection, machine learning and memory. The results are pretty exciting, and I recommend you check out the abstracts. We're inching closer to building a T-800, but there is still a huge chasm between the best artificial brain and the average human brain (among other issues).
Over at MIT, a neuroscientist is attempting to bridge this chasm by reverse engineering the human brain, neuron by neuron. However, we shouldn't hold our breath on this project, as it will take over a decade to complete (although quantum computers may deliver a huge performance leap to accelerate this work). [TechNyou note – this is assuming quantum computers have been developed within 10 years – which is highly unlikely.] Simply imaging a single human brain with electron microscopes will yield about one zettabyte (10²¹ bytes) of data, roughly equal to the world's current volume of digital content. Not to mention the potential for this project to be a mechanism for a technological singularity, a term that refers to a hypothetical future emergence of greater-than-human intelligence. The singularity is what happened when the humans turned on Skynet – not cool.
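To give that zettabyte figure some scale, here is a quick back-of-the-envelope calculation. It uses only the 10²¹-byte estimate above; the one-terabyte laptop drive is my own assumption, purely for comparison:

```python
# Rough scale comparison for the brain-imaging data estimate above.
# Assumed figure for comparison: a typical laptop drive of ~1 TB.

ZETTABYTE = 10**21       # bytes in one zettabyte (one imaged brain)
LAPTOP_DRIVE = 10**12    # bytes in a 1 TB drive (illustrative assumption)

drives_needed = ZETTABYTE // LAPTOP_DRIVE
print(f"One imaged brain ≈ {drives_needed:,} one-terabyte drives")
# → One imaged brain ≈ 1,000,000,000 one-terabyte drives
```

A billion hard drives for one brain – storage alone is a serious obstacle before any reverse engineering begins.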
Meanwhile, scientists from Northwestern University wrote a paper called "Networks in Motion" that examined the theory of networks. Networks govern communication, growth, herd behaviour and other key processes in nature and society, and are becoming increasingly amenable to modelling, forecasting and control. This means we will be able to exploit network theory to engineer new systems in areas such as synthetic biology and microfluidics, and to radically improve established fields such as traffic management and materials research. What this means for robots is a brain that can out-compute the puny human without raising a bead of binary sweat.
Artificial intelligence has already become part of our phones with the advent of Siri. More impressively, Watson, a computer built by IBM, has beaten our best minds at Jeopardy!. Designed for complex analytics, Watson combines massively parallel processors across 90 servers with 16 terabytes of RAM to quickly answer questions posed in natural language. Watson can process 500 gigabytes – the equivalent of a million books – per second. I can barely remember the last Harry Potter movie.
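It is fun to combine the two figures in this piece: at Watson's quoted 500 GB/s, how long would it take to chew through the one-zettabyte brain scan mentioned earlier? This is illustrative arithmetic only, using just the numbers already given:

```python
# Time for Watson's quoted throughput to process one zettabyte.
# Both figures are taken from the article; this is a rough sketch.

RATE = 500 * 10**9       # bytes per second (500 GB/s)
BRAIN_SCAN = 10**21      # bytes (one zettabyte)

seconds = BRAIN_SCAN / RATE
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.2e} s ≈ {years:.0f} years")
# → 2.00e+09 s ≈ 63 years
```

So even the fastest question-answering machine we have would need the better part of a lifetime just to read one brain's worth of data.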
The T-800 was covered by a synthetic-biological skin that had the main task of providing camouflage for the metal endoskeleton. The T-800 seemingly doesn’t need skin to operate, although it can be assumed that it loses certain functionality when the skin is removed, including the ability to blend in at the gym.
Recent advances make a "skin" for robots a realistic prospect. One such development is an electronic skin, about the thickness of a human hair, that can monitor health and wirelessly transmit the data to your cell phone and your doctor's office. This is a huge improvement on our current approach to health monitoring, which relies on infrequent visits to the doctor and large machines that go *ping*.
One of the key benefits of our skin is its ability to self-repair. Skin gets this ability from a constant blood supply delivering oxygen, white blood cells and other important repair and defence mechanisms – constantly refreshing, repairing and defending. If you have a scar, you can see that without a blood supply, skin is susceptible to ongoing damage and infection. Self-healing materials are now in development: they change colour when damaged, repair themselves when exposed to visible light or to changes in temperature or pH, and can fix themselves multiple times. A type of rubber has also been developed that repairs itself when the broken ends are pinched together.
It is not implausible to imagine a future robotic skin far more advanced than the T-800 variety, looking realistic and playing a key role in sensory input and feedback. It doesn't seem too far-fetched to imagine this skin being developed in the next 20 years.
Building your own T-800
Based on the technologies discussed in this article, how close are we to building a T-800? My best estimation is that by 2040, something like a T-800 should be walking around – hopefully serving our coffees and not plotting our downfall. It is not too much of a stretch to see how ASIMO could become Arnold in another 28 years. Using Moore’s Law, incorporating a dash of network theory and adding a touch of quantum computing, it is reasonable to conclude that the computing technology required will be both advanced enough and small enough to control a robot on a mission from the future to kill Sarah Connor… or do the washing up, pick up the kids from school and take the garbage out.
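The Moore's Law extrapolation above can be sketched numerically. The two-year doubling period and the 2012 baseline transistor count are my own assumptions, chosen only to illustrate the compounding:

```python
# Naive Moore's Law projection: transistor count doubles every ~2 years.
# The baseline count and doubling period are illustrative assumptions.

def project(base_count, base_year, target_year, doubling_years=2):
    """Return the projected transistor count at target_year."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Assumed baseline: ~1.4 billion transistors on a 2012-era CPU.
factor = project(1.4e9, 2012, 2040) / 1.4e9
print(f"By 2040: ~{factor:,.0f}x the transistors of a 2012 chip")
# → By 2040: ~16,384x the transistors of a 2012 chip
```

A 16,000-fold increase is exactly the kind of leap the T-800 scenario quietly assumes – and, of course, it assumes Moore's Law holds for another 28 years.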
I didn't even discuss the potential of carbon nanotubes and graphene to provide significant improvements in strength and speed; with the recent creation of silicene (single-atom silicon nanosheets) and its potential for integration with current computing hardware, improvements of orders of magnitude could follow.
The major issue I perceive is power. A battery that can recharge quickly and last for days or weeks is not currently on the cards, and incremental improvements will not deliver one by 2040. It could be our one saving grace from ultimate "Termination", or the major hurdle to a horde of robot assistants.
In my next article, I will examine the T-1000, a mimetic poly-alloy (liquid metal) robot whose science is based in the exciting world of nanotech.