Later this month the UN will discuss the possibility of autonomous killing machines at a convention on weaponry in Geneva. They are essentially talking about Terminators, or the drones from the Iron Man films (pictured): killer robots that don’t require human involvement or decision-making. We could be in serious trouble if Google (a.k.a. Skynet?) decides to get involved – or maybe it already is?
Isn’t it crazy to think that we’re considering developing and building robots designed, quite literally, to kill us? Surely it would only be a matter of time before they fell into the wrong hands or suffered a life-destroying software malfunction. It’s hard to know how advanced technology has become in the most secretive and well-funded laboratories around the world, namely those involved in military projects. We can be reasonably sure these technologies do not yet exist, but equally sure they are close to being a possibility, if not a reality. The weapons experts meeting in Geneva are therefore attempting to pre-empt killer-robot manufacturers and impose a ban that will ensure the safety of us all.
This video neatly sums up my previous article on Phonies being distracted by technology and failing to engage with friends and family. The message is that we should all “look up” from those screens and see what is happening around us, talk to people and live our lives. We need to use our phones in moderation and be aware of the appropriate times to use them – or, more often, to keep them firmly in our pockets.
Over at Singularity University in Silicon Valley, California (of course), some of the brightest (and richest, at $29,500 per ten-week course) technology-loving futurists gather to discuss, imagine and create. Ray Kurzweil, co-founder of the ‘university’, has recently been speaking about some of his work as Director of Engineering at Google. He describes their current mission as “reengineering the human brain” in such a way that we can eventually connect it to the internet, which he predicts will be realised in the 2030s. Kurzweil is a leader in this field who correctly predicted such things as the year a computer would beat a human chess grandmaster and the explosion of the internet, so it’s hard not to take his word for it.
Perhaps the most shocking part of Kurzweil and Google’s work is not that they are trying to hook us up to the cloud by inserting nanobots into our brains, but the brain power that could result from such a process. Were it done correctly – and by that I mean accurately mimicking the brain’s existing connections and hierarchical structure – it could create a super-intelligent network of brains. Imagine linking the neocortices of the members of a lab group, allowing them to trade ideas, innovate and discover more efficiently using a ‘multi-brainstorm’ approach. Teaching would be transformed – think Leonardo DiCaprio and Ellen Page designing radical architecture while dream-sharing in the film Inception – though in danger of imposing ideas instead of just presenting them.
Picture world leaders plugged into each other’s heads, discussing the future on behalf of the rest of us. Could they not easily do away with transparency, leaving the public out of the loop? Or would it allow for greater collaboration, democracy and openness in politics, and worldwide, unified action? There is also a danger that the experience of linking minds may be so overwhelming that any group risks an explosion of power-thirst and ambition among its members. But more likely not.
There is no shortage of volunteers to scout, pioneer and trial new technologies, even when they come with unresolved ethical dilemmas and questionable futures. The “Explorers” who bought the first Google Glass models and have been using them ever since are just one example – cameras in contact lenses could be next. Whatever new tech is released, there is always someone willing to test it, so the progression towards greater technological dependence and human enhancement is in some ways inevitable (if such things are mechanistically feasible, which they probably are).
How do you clone a dog like they did on the Channel 4 programme “The £60,000 Puppy: Cloning Man’s Best Friend”, shown this week? It might seem obvious to some, but to the great majority cloning sounds like something out of science fiction. Let’s break it down as simply as possible. DNA is the recipe used by almost all life on Earth. Every animal that has ever lived started as a single cell and developed into an adult by following, very precisely, the instructions in its DNA. Using delicate techniques, scientists are now able to remove the DNA from an egg cell, leaving a healthy but information-less cell. DNA is then taken from a living individual and inserted into the empty cell, replacing what was removed and providing the instructions to create an adult. The resulting embryo is implanted in a female, and pregnancy and growth begin.
So an individual’s unique DNA specifies precisely how to ‘make’ that individual from the starting point of a single cell. By replacing one cell’s recipe with that of a living individual, a clone is produced that is essentially an identical twin of the original, just with a different birthday. This does not mean the clone is the exact same individual – just look at typical identical twins: they are made individual by the environments they experience throughout their lives.
Cloning cannot recreate an individual’s personality; it cannot make a copy of a person; it cannot bring someone back from the dead or let them live forever. It could, however, be used to breed exclusively the most desirable and valuable animals for agriculture – albeit at a greater risk of epidemics, since genetically identical herds share the same vulnerabilities to disease.
So I’ve recently realised that some people are much less sociable in group situations than they surely were before phones became quite so versatile. These people – I like to call them ‘Phonies’ – are constantly sidetracked by their phones: the temptations and draws of social media and nifty, ‘time-saving’, trending, gadget-like apps. Constantly engaging with dozens of little ‘helpful’ apps can save seconds and make a day run smoothly, but it comes at a cost to friendly interaction. Often Phonies don’t listen to your story, thoughts or questions; they just stare vacantly at their screens instead. In extreme cases Phonies distance themselves from the group entirely, busily checking what people are tweeting or what they’ve been up to and posted on Facebook or Instagram. I say it’s simply not worth it. We desperately need to separate the time we spend engaged with our phones from the time we spend fully engaged with our friends! Otherwise we’ll just continue this decline and eventually won’t bother talking much at all. And it’s infuriating and boring to be around friends who are full- or part-time Phonies.
I would almost go so far as to say I’m offended that a shiny LED screen can sway someone’s attention from talking to me. Unfortunately the rise of the Phonies has the potential to spiral out of control, especially as phone use can be contagious: one person engages with their phone, others get bored/offended/jealous and soon immerse themselves in their own tech, leading to conversational inertia. The young may be at greatest risk due to their high exposure to tablets and games at home (see this worrying article), resulting in the incomplete development of their basic abilities to socialise, befriend and network. The result: a world full of Phonies, like the two depicted below in a new piece by Banksy called “Mobile Lovers”.
Will we augment ourselves and become one with machines, as futurist Ray Kurzweil (pictured below) predicted many years ago would happen by 2029? It certainly looks like it. Think about it: you already boost your social interactions daily using a handheld supercomputer. We spend huge chunks of our day browsing, checking facts and ‘socialising’ on the internet. We even have virtual versions of ourselves that let others meet and learn about us while we sleep. It is not a question of whether we will become one with technology – our lives are already heavily reliant on and interwoven with computers and electronics, and the integration is only going to become more efficient and subtle. The result? Someone who looks totally normal but can draw from a vast database of knowledge and communicate online without breaking a sweat. But since we don’t seem to be able to multitask, I imagine this will lead to even less attention being paid to reality, despite the promise of more seamless usage. Let’s just hope we maintain the ability to relax and interact physically with others, or life might lose its joy.