Don't Worry... You're Already a Robot, Part 2: Navigating (Virtual) Reality

In the first part of this series, we looked at how our perception of self is a computer-generated, virtual artificial agent, and at how the most fundamental (and electronically replicable) processes that generate that internal voice we consider our consciousness are little more than the presence or absence of an electrical discharge, little more than a 1 or a 0.

But that amalgamation of synaptic firing that creates the self only covers the internal side of things. There's also that chaotic external realm we all share to consider.

But as any good philosopher knows, you’ve got to define a thing before you can explore it. So what is that external realm? What exactly “is” reality? Well, we won’t go too deep down the existential crisis that is that rabbit hole, but there is one thing we are pretty damn sure about at this point:

And that's that reality is merely energy vibrating at varying speeds.

This energy is grouped by wavelength, and each group paints our perception with the pigment that its particular frequency has come to represent for our curious, ape-evolved eyes.


But how do we interact with this swimming pool of vibrations? If we are to do anything with it, we need to be able to break it down, analyze it, and do what humans do best: impose order through definitions and categorization.

But first, we need some way to input information about our surroundings. And that, my dear friends, we do with senses and symbols.

Now our eyes aren't only good for distinguishing those frequencies, as I mentioned above, but also for capturing them. They are one of our five input systems. And with our camera-like eyes recording our environment, we can feed into our brain those frequencies that are resonating with each other, those vibrating photons grouping themselves into a distinguishable form. And thus your machine is given input: a symbol.

(Want to skip the layman's symbol discussion? Jump to the deeper philosophy.)

Biological Journey of the Symbol

Now biology can begin its process of reality interpretation.

That symbol journeys from your eyes into your brain. You consider its shape and size, then access your memories for everything you know about that symbol. If you've had bad past experiences with said symbol, you may become anxious or fearful. If you've had good experiences with it, you'll likely become excited and happy. Synapses fire as you build a reactive perception of the symbol: is it good or bad? Do you observe it or ignore it, stay put, move closer, or run away? Touch or don't, stroke or pat? All past reactions you realized you could use in response to that particular symbol or ones like it.

Based on those past experiences and your cultural programming, you make an educated choice: your brain fires the corresponding synapses, the signals travel down through your nerves to contract and relax muscles, your glands release chemicals like neurotransmitters and hormones in proportion to the needs of your response, and the cycle continues, moment to moment, symbol to symbol, ad infinitum.

 Example of a Symbol’s Biological Journey:

You come across a bear. The symbol "bear" is captured by your eyes and sent into your memories like a command to retrieve data from a database. You think of all relevant memories (which include learned knowledge) of bears.

This symbol is an easy one because its functions are quite clearly defined. Fear is perceived. Fight-or-flight kicks in, but your memories (including your education) remove the option of fighting (because you're 90% sure you'd lose), so you're left with flight. Your body is alerted to this heightened state of fear and need for energy. Adrenaline courses through you to speed you to safety, your legs receive signals from your brain that contract and relax their muscles, and you break into your run. (Note: Don't run from bears. Play dead. Because in the words of Anchorman: "[it's] a live bear and it will literally rip your face off.")
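If you like, the bear encounter can be sketched as the database lookup it resembles. This is a toy illustration, not real software: every name here is hypothetical, and the odds are invented stand-ins for what your memories "know" about surviving each response.

```python
# Hypothetical "memories": what we know about the symbol, including
# remembered odds of surviving each possible reaction to it.
MEMORY = {
    "bear": {
        "threat": True,
        "survival_odds": {"fight": 0.10, "flee": 0.60, "play_dead": 0.85},
    }
}

def react(symbol):
    """Retrieve what we know about a symbol and pick the safest reaction."""
    knowledge = MEMORY.get(symbol, {})
    if not knowledge.get("threat"):
        return "observe"  # nothing scary on file: just keep watching
    odds = knowledge["survival_odds"]
    return max(odds, key=odds.get)  # fighting loses out at 10% odds

print(react("bear"))       # the memory table above favors playing dead
print(react("butterfly"))  # unknown, non-threatening symbol
```

Note that with fighting at 10% odds it gets pruned immediately, just like your education pruned it above.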

Robotic Journey of a Symbol

Now let’s see how this would look if we could technologically replicate, as though we were robots, this sense and symbols process we’ve discussed so far.

Well naturally, a variety of frequencies are still resonating within and all around you at all times. But now we use your machine senses (sensors like microphones, pressure pads, cameras, etc.) to capture their form, to categorize them as symbols and patterns.

Your database is then queried: a request to your digital memories to return all known data about this perceived pattern (just like pattern-recognition software). Your CPU (your brain, your central processing unit, your virtual agent) yearns for a desired outcome (based on its programming up to that point) in response to the retrieved attributes of that symbol. Then a probability algorithm (much like critical thought) runs to suggest the best route toward your desired outcome. Once that decision is made, the CPU invokes the processes to accomplish your goal by activating the neurotransmitter and movement modules. This could be as simple as calling up an algorithm that manipulates the 1s and 0s of your brain in a way that corresponds to a certain dosage of key chemicals like epinephrine (adrenaline), serotonin, or melatonin. And rather than sending an electrical impulse along nerves to move muscles, we can simply command pistons, actuators, and the like to move the desired amount.
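The whole pipeline, pattern in, chemicals and movement out, can be sketched in a few lines. Again, everything here is a hypothetical stand-in (the database entries, the dosage scale, the actuator commands), meant only to show the shape of the loop, not a real robot's API.

```python
# "Digital memories": captured pattern -> known attributes.
DATABASE = {
    "obstacle": {"danger": 0.3, "actions": {"jump": 0.9, "stop": 0.5}},
    "cliff":    {"danger": 0.9, "actions": {"stop": 0.95, "jump": 0.1}},
}

def query(pattern):
    """Return all known data about a perceived pattern."""
    return DATABASE.get(pattern, {"danger": 0.0, "actions": {"observe": 1.0}})

def decide(attributes):
    """The 'probability algorithm': pick the action with the best odds."""
    actions = attributes["actions"]
    return max(actions, key=actions.get)

def release_chemicals(danger):
    """The 'neurotransmitter module': map danger to an adrenaline dose, 0-100."""
    return round(danger * 100)

def navigate(pattern):
    """One full cycle: sense -> query -> dose -> decide -> actuate."""
    attributes = query(pattern)
    dose = release_chemicals(attributes["danger"])
    command = decide(attributes)
    return f"adrenaline={dose}, actuators -> {command}"

print(navigate("obstacle"))
print(navigate("cliff"))
```

Run moment to moment over a stream of patterns, that loop is the cycle described above, only with pistons in place of muscles.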

MIT has already shown how far we've come toward proving my point. Its robotic cheetah can sense its surroundings well enough to jump random obstacles, an extremely dynamic and difficult ability that had previously been reserved for flesh-and-blood creatures since the dawn of life on this planet:

So now we know we can replicate the process.

Now let’s explore why reality’s symbols can be just as meaningful to a robot as they are to a human. 

You see, our senses are just biological equivalents of our (future) robot-brain's I/O (input/output) systems. We've already proven we can replicate these senses with technology (ears are like microphones, eyes are like cameras, etc.), and moreover, we've proven we can replicate them quite well, considering how easy it is to distinguish familiar voices and places when you're on the phone or watching TV; your friend still sounds like your friend on the phone, and that picture of them still looks like them. And that's because the engineers who created these technologies realized reality and our biology are nothing but data: frequencies whose vibrating states we can measure and capture snapshots of.

Frame by frame, sound-bite by sound-bite.
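That snapshotting is exactly what digital audio does, and it's simple enough to sketch: sample a vibration's amplitude at fixed intervals and quantize each snapshot to an integer, the way a microphone and converter turn sound into 1s and 0s. The rates and levels below are illustrative choices, not anything from a real device spec.

```python
import math

def sample(frequency_hz, sample_rate_hz, n_samples, levels=256):
    """Capture n snapshots of a sine-wave 'vibration' as integers 0..levels-1."""
    snapshots = []
    for i in range(n_samples):
        t = i / sample_rate_hz                               # time of this snapshot
        amplitude = math.sin(2 * math.pi * frequency_hz * t)  # -1..1
        # Shift and scale the amplitude into a fixed set of levels: the 1s and 0s
        snapshots.append(round((amplitude + 1) / 2 * (levels - 1)))
    return snapshots

# A 440 Hz tone (concert A) snapshotted 8 times in its first millisecond
print(sample(440, 8000, 8))
```

Play those integers back fast enough and your ear can't tell the captured vibration from the original, which is the whole trick behind your friend still sounding like your friend on the phone.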

And let's remember that when we're talking about robots doing this calculating, we're talking about digital minds. And digital minds don't need to be restricted to physical bodies any more than... well... we do. Our robot minds could also live inside virtual reality (some philosophers even argue we might already be living in a simulation, i.e., the Matrix).

If reality’s symbols are just data that our robotic mind can process the same way as a human mind, would we know the difference between being inside virtual reality and being in the real world?

Considering that we already live in a time where digital symbols can impact our brains just as much as, if not more than, "natural" ones, the answer would seem to be no: there wouldn't be a difference.

Think about it: how many hours do you typically stare at screens in your day-to-day life? How much time do you live inside portals that puncture through time and space to capture those far-off frequencies so they can bring you immersion into their symbolic meaning? Have you recently taken a trip down memory lane through Facebook photos, explored foreign lands through video, or partaken in a Skype conversation? Do you feel robbed by these experiences, or fortunate that you get to transcend time-space's limitations to enjoy friends, family, geographies, cities, and stories that lie outside the scope of your present form's limited biological senses?

I imagine you feel fortunate and uplifted by these experiences, otherwise why would you watch a movie or listen to an album? 

So then I pose to you: if you were an android or a virtual avatar, are you truly losing anything if your mind still fires its reactions to the symbols in the same way? Do you fear and shun electronic recordings of Beethoven or Bach because they're too robotic? Or do you swoon with emotional catharsis? Does it matter whether it's a pixel or a frequency if the colors and forms are the same as they enter your visual sensor, whether that's an eye or a camera? Do you still feel emotionally moved by old family movies? Did you cry at the end of Titanic? Did you feel a chill when William Wallace yelled "Freedom!" in Braveheart?

I’m sure many of you are thinking: “Oh, but Steven, it’s not the same. Exploring the Grand Canyon on foot, in real life, is far different from watching a drone fly through it.”

And you’re absolutely right. For now. But the point isn’t how good the technology is right now in terms of making us feel like we’re actually there. The point is not to be afraid of the future because current technology hasn’t perfectly replicated the senses.

The point is exposing the similarities, in pointing out that we're not losing our humanity as we port our lives into technology, that we can already replicate the process of navigating reality with technology, and that it's already such a good substitute that we've fallen in love with what it's given us. We lose ourselves in it. It triggers our emotions, inspires us, teaches us, even angers us. Think about videos of Hitler, of WWII, of Tiananmen Square, of speeches by MLK or JFK, of watching the Apollo missions shoot off into space. These clips of our world have driven men and women to rage or tears, encouraged generations of revolution or quelled hatred, and it was all pixels, all data, all 1s and 0s packaged in a way that allowed them to be symbols in your eyes and ears.

So again, as we better the technology, delivering the symbols through cameras or virtual sensors that feed directly into your robotic brain rather than through passive viewing on a screen, do you feel like we're losing something?

This is our cocoon phase; why break out before the transformation is complete?

Or do you feel like we’d be gaining, taking something that already has brought us so much and making it even better?

If you fear the loss of vividness, the loss of some tangible sensations, then I suggest that rather than letting fear slow our improvement of the technology, we should embrace it, promote it, and guide it, so that we can take our current films, music, communications, and the rest out of this primitive state of merely perceiving and move them into a state of feeling, a state of complete immersion: a state where we feel the settings of our films, feel the musical vibrations, and get the chance to feel our lovers as we hold them atop a virtual mountainscape rather than staring at them on Skype through a 2D screen.

You must not forget that we, as a species, have already chosen to live inside a technological world. If you ever watch movies, listen to any non-live music, talk on the phone, text, or use email, then you've already embraced the transformation into a new species. So why limit ourselves now when we're on the cusp of making it better than ever? Especially when the very aspects we like the least about our technological zeitgeist (the lack of feeling) are the very problems we are on the verge of fixing? Your fear of change now would leave us in this dangerous transitional state without the utopian boons of an advanced technological future.

If we can work together to take this next step into sensors interpreting patterns, then we gain the ability to navigate reality as robots and virtual worlds as digital minds, constantly inputting, analyzing, and deciding, losing none of the sensations we have now but gaining the ability to escape the physical confines that keep people from seeing our planet, from traveling far from their homes and jobs, from seeing other cultures, from opening their minds and exploring inspirational vistas.

If we embrace this transformation, we can allow people to work from anywhere, live anywhere, see anyone whenever they want. And while people argue the journey is what makes life worth living, I wonder if those people have ever had lovers, family members, or friends who lived on the other side of the planet. Because I know for me, if I could upload into a shared space with those people via a thought every day rather than riding in a plane for 24 hours to see them once every few years, my time with them would still mean the world to me. They wouldn't mean less to me because it was easier to see them. Just like reality won't be less meaningful when we make it easier to navigate.