Uploading A Brain To The Cloud

Doesn’t it sound cool when you read a book or watch a movie about someone uploading a mind or a brain onto a computer? That science-fiction staple is more real than you might imagine! It doesn’t have to be scary, either: I’m not talking about removing your brain or mine from its place, but about developing something similar to a brain, something that behaves and performs tasks like a brain, and sending that to the cloud. Basically, this is all about doing machine learning on the cloud!

So what’s this brain, then? Modern machine learning algorithms are intended to supplement human activity: they take on mundane tasks as well as more complex ones such as decision making, recommendation, pattern recognition, object identification, prediction, and natural language understanding.

Developing your own equivalent of such a tool is now easier than ever. My favorite programming language is Python, and in Python my two favorite frameworks are scikit-learn for classical machine learning and TensorFlow for deep learning. Training your equivalent of a human eye and temporal lobes for object detection does require a great deal of processing power, but the cloud, distributed computing, and GPU acceleration offer a solution to that problem.
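To make that concrete, here is a toy stand-in for "training an eye": a small neural network that learns to recognize handwritten digits from scikit-learn's bundled 8x8 images. The dataset and hyperparameters here are purely illustrative; real vision models are far larger, which is exactly where GPUs and the cloud come in.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Bundled dataset of 8x8 grayscale digit images, flattened to 64 features
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A single hidden layer of 64 units is enough for this toy problem
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy: %.2f" % clf.score(X_test, y_test))
```

Even this miniature "eye" typically recognizes over 90% of unseen digits; scaling the same idea up to photographs is what demands serious compute.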

So what is the technology behind all that? It is becoming exceptionally easy to develop a machine learning model; with just a few lines of code, one can, for example, build a model that recommends a t-shirt based on historical purchases:

from sklearn.neighbors import NearestNeighbors
import numpy as np
import pandas as pd

# Historical purchase data, one row per customer
train_data = pd.read_csv("data.csv")

# Fit an unsupervised nearest-neighbors model on the purchase history
model = NearestNeighbors(n_neighbors=5, algorithm='auto').fit(train_data)

# A new customer profile to find recommendations for
X = np.asarray([1, 0, 25000, 1]).reshape(1, -1)

# Indices of the five most similar historical purchases
similar = model.kneighbors(X, 5, return_distance=False)
print(similar)
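Since the `data.csv` file is not included here, the same idea can be sketched end to end with synthetic data; the four feature columns (a gender flag, a repeat-buyer flag, salary, and a size code) are invented purely for illustration:

```python
from sklearn.neighbors import NearestNeighbors
import numpy as np

# Synthetic purchase history: one row per past customer
# (columns invented for illustration: gender, repeat buyer, salary, size)
purchases = np.array([
    [1, 0, 24000, 1],
    [0, 1, 52000, 2],
    [1, 1, 26000, 1],
    [0, 0, 80000, 3],
    [1, 0, 31000, 2],
])

model = NearestNeighbors(n_neighbors=2).fit(purchases)

# A new customer to recommend for
customer = np.array([1, 0, 25000, 1]).reshape(1, -1)

# Row indices of the two most similar historical customers
similar = model.kneighbors(customer, 2, return_distance=False)
print(similar)
```

The result is a pair of row indices into `purchases`; the recommendation step is then simply "suggest what those similar customers bought."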

The first snippet reads a comma-separated values file, fits a nearest-neighbors model, and looks up the historical entries most similar to a sample customer X. Training this model is not computationally demanding, but a more advanced model would require more power. I typically use IBM’s Data Science Experience for the more complex end-to-end solutions, because you can connect a Watson Machine Learning service and deploy your model to the cloud (i.e. upload a brain) with another small script:

from repository.mlrepositoryclient import MLRepositoryClient
from repository.mlrepositoryartifact import MLRepositoryArtifact
from repository.mlrepository import MetaProps, MetaNames

wml_credentials = {
    "url": "https://ibm-watson-ml.mybluemix.net",
    "access_key": "***",
    "username": "***",
    "password": "***",
    "instance_id": "***"
}

ml_repository_client = MLRepositoryClient(wml_credentials['url'])
ml_repository_client.authorize(wml_credentials['username'], wml_credentials['password'])

model_artifact = MLRepositoryArtifact(model, name="My Model")
saved_model = ml_repository_client.models.save(model_artifact)
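Independently of Watson, it often helps to serialize a trained model to disk first, so the exact same object can be shipped to any machine. Here is a minimal sketch using joblib, scikit-learn's usual persistence tool; the model and file name are arbitrary placeholders:

```python
import joblib
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Any trained scikit-learn model works; a tiny toy one here
model = NearestNeighbors(n_neighbors=2).fit(np.eye(4))

# Serialize the fitted model to disk
joblib.dump(model, "model.joblib")

# Later, or on another machine: restore and query it unchanged
restored = joblib.load("model.joblib")
print(restored.kneighbors(np.eye(4)[:1], 2, return_distance=False))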

This is the beauty of the current technology landscape: things that sounded crazy a short while ago are now easily doable, and things that sound like science fiction are a reality. At this rate, and with the rise of quantum computing, I think we will soon be able to develop something truly close to a human brain.

Also published on Medium.

Written by Aoun
