Numerai – deep learning example code.

In a previous post on Numerai, I described very basic code to get into the world of machine learning competitions. This one is a continuation, so if you haven’t read it, I recommend doing so first – here. In this post, we will add a little more complexity to the whole process. We will split out 20% of the training data as a validation set, so we can train different models and compare their performance, and we will use a deep neural net as the predicting model.

Ok, let’s do some machine learning

Let’s start by importing what will be required; this step is similar to what we did for the first model. Apart from pandas, we import “StandardScaler” to preprocess the data before feeding it into the neural net. We will use “train_test_split” to split out 20% of the data as a test set, and “roc_auc_score” is a useful metric to check and compare the performance of the model. We will also need the neural net itself – that will be the Classifier from ‘scikit-neuralnetwork’ (sknn).

Imports first:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sknn.mlp import Classifier, Layer

With all the required imports in place, we can load the data from the CSV files (remember to update the paths to the downloaded files):

train = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_training_data.csv")
test = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_tournament_data.csv")
sub = pd.read_csv("/home/m/Numerai/numerai_datasets/example_predictions.csv")

Some basic data manipulation is required:

sub["t_id"]=test["t_id"]
test.drop("t_id", axis=1,inplace=True)
labels=train["target"]
train.drop("target", axis=1,inplace=True)
train=train.values
labels=labels.values

In the next four lines we will do what is called standardization. The result of standardization (or Z-score normalization) is that the features are rescaled to zero mean and unit variance, i.e. μ=0 and σ=1.

scaler = StandardScaler()
scaler.fit(train)
train = scaler.transform(train)
test = scaler.transform(test)
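If you want to confirm the scaler did what we expect, a quick optional check is to look at the per-feature mean and standard deviation of the transformed training data:

import numpy as np

# After standardization every feature column of the training data
# should have mean ~0 and standard deviation ~1.
print(np.round(train.mean(axis=0), 3))   # expected: ~0 for each feature
print(np.round(train.std(axis=0), 3))    # expected: ~1 for each feature

Note that the scaler is fit on the training data only and then applied to the tournament data, so the test features will not be exactly zero-mean/unit-variance – that is intentional and avoids information from the tournament set leaking into the preprocessing.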

The next line of code splits the original training set into a train and a test set; basically, we set aside 20% of the original training data so we can check out-of-sample performance and guard against overfitting.

X_train, X_test, y_train, y_test = train_test_split(train,labels, test_size=0.2, random_state=35)
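A quick, optional sanity check on the split, using the arrays defined above:

# Confirm the 80/20 split – row counts of the two parts
# should add up to the size of the original training data.
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)
print(len(X_train) + len(X_test) == len(train))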

With all the data preprocessed, we are ready to define the model: set the number of layers in the neural network and the number of neurons in each layer. A few lines of code do it:

nn = Classifier(
    layers=[
        Layer("Tanh", units=50),
        Layer("Tanh", units=200),
        Layer("Tanh", units=200),
        Layer("Tanh", units=50),
        Layer("Softmax")],
    learning_rule='adadelta',
    learning_rate=0.01,
    n_iter=5,
    verbose=1,
    loss_type='mcc')

“units=50” – sets the number of neurons in a layer; the number of inputs to the first layer is determined by the number of features in the data we feed in.

“Tanh” – this is the activation function; you can use others as well, e.g. Rectifier, ExpLin, Sigmoid, or Convolution. In the last layer the activation function is Softmax – the usual output-layer function for classification tasks. Our network has five layers with different numbers of neurons; there are no strict rules about the number of neurons and layers, so it is more art than science – you just need to try different versions and check what works best.

“learning_rule='adadelta'” – sets the learning algorithm to ‘adadelta’; more choices are available: sgd, momentum, nesterov, adagrad or rmsprop. Again, just try them and check what works best – you can even mix them for different layers.

“learning_rate=0.01” – the learning rate; as a rule of thumb you start with the ‘default’ value of 0.01, but other values can be used, mostly anything from 0.001 to 0.1.

“n_iter=5” – the number of iterations (epochs); the higher the number, the longer the learning process will take. 5 is an example only – watch the error after each epoch, as at some point it will stop dropping. I have seen anything from 50 to 5000, so feel free to play with it.

“verbose=1” – this parameter lets us see training progress on screen.

“loss_type='mcc'” – the loss function; ‘mcc’ is typical for classification tasks.

As the model is set, we can feed in the data and train it; depending on how powerful your PC is, this can take from seconds to days. GPU computing is recommended for neural network training.

nn.fit(X_train, y_train)
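A side note on the GPU recommendation: sknn runs on a Theano-based backend, so GPU usage is configured through Theano rather than through sknn itself. A minimal sketch – the flag value below is an assumption about your Theano/CUDA setup, so adjust it to your environment:

# Assumption: sknn uses a Theano backend, so the GPU can be requested through
# standard Theano flags. This must be set before sknn/Theano is first imported
# and requires a working CUDA installation; otherwise training stays on the CPU.
import os
os.environ.setdefault("THEANO_FLAGS", "device=gpu,floatX=float32")

from sknn.mlp import Classifier, Layer  # import only after the flag is set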

The line below validates the model against the 20% of data we set aside earlier.

print('Overall AUC:', roc_auc_score(y_test, nn.predict_proba(X_test)[:,1]))
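Since there are no strict rules about the architecture or the learning rule, one practical approach is to train a few variants and compare their AUC on the hold-out set. Below is a minimal sketch of that idea – the candidate configurations are made-up examples for illustration, not recommendations:

# Compare a few hand-picked configurations on the 20% hold-out set.
# The learning rules and layer sizes below are illustrative examples only.
candidates = {
    "adadelta_small": dict(rule='adadelta',
                           hidden=[Layer("Tanh", units=50), Layer("Tanh", units=50)]),
    "sgd_wide":       dict(rule='sgd',
                           hidden=[Layer("Tanh", units=200), Layer("Tanh", units=200)]),
}

for name, cfg in candidates.items():
    model = Classifier(
        layers=cfg["hidden"] + [Layer("Softmax")],
        learning_rule=cfg["rule"],
        learning_rate=0.01,
        n_iter=5,
        loss_type='mcc')
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(name, "AUC:", auc)

Whichever configuration wins here can then be retrained before producing the final submission.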

Using the above code we can play around with different settings and neural network architectures and check the performance. After finding the best settings, they can be applied to produce the prediction to be uploaded to Numerai – just run the last three lines (remember to update the path where the file will be saved):

y_pred = nn.predict_proba(test)
sub["probability"]=y_pred[:,1]
sub.to_csv("/home/m/Numerai/numerai_datasets/Prediction.csv", index=False)
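Before uploading, it does not hurt to sanity-check the saved file – a small optional sketch, using the column names from the code above:

# Optional sanity check on the submission before uploading.
check = pd.read_csv("/home/m/Numerai/numerai_datasets/Prediction.csv")
print(check.columns.tolist())                     # expect the same columns as example_predictions.csv
print(check["probability"].between(0, 1).all())   # probabilities must stay within [0, 1]
print(len(check) == len(test))                    # one prediction per tournament row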

I hope the above was useful and that you can now start playing around with deep learning for trading predictions on Numerai. If you have any comments or questions, please feel free to contact me.

Full code below:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sknn.mlp import Classifier, Layer

train = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_training_data.csv")
test = pd.read_csv("/home/m/Numerai/numerai_datasets/numerai_tournament_data.csv")
sub = pd.read_csv("/home/m/Numerai/numerai_datasets/example_predictions.csv")

sub["t_id"]=test["t_id"]
test.drop("t_id", axis=1,inplace=True)

labels=train["target"]
train.drop("target", axis=1,inplace=True)

train=train.values
labels=labels.values

scaler = StandardScaler()
scaler.fit(train)
train = scaler.transform(train)
test = scaler.transform(test)

X_train, X_test, y_train, y_test = train_test_split(train,labels, test_size=0.2, random_state=35)

nn = Classifier(
    layers=[
        Layer("Tanh", units=50),
        Layer("Tanh", units=200),
        Layer("Tanh", units=200),
        Layer("Tanh", units=50),
        Layer("Softmax")],
    learning_rule='adadelta',
    learning_rate=0.01,
    n_iter=5,
    verbose=1,
    loss_type='mcc')

nn.fit(X_train, y_train)

print('Overall AUC:', roc_auc_score(y_test, nn.predict_proba(X_test)[:,1]))

y_pred = nn.predict_proba(test)
sub["probability"]=y_pred[:,1]
sub.to_csv("/home/m/Numerai/numerai_datasets/Prediction.csv", index=False)


If you are looking for more information on data science, statistics or trading, check out some of the free online courses available at coursera.org, edx.org or udemy.com.

Recommended reading list:


Data Science from Scratch: First Principles with Python

Data science libraries, frameworks, modules, and toolkits are great for doing data science, but they’re also a good way to dive into the discipline without actually understanding data science. In this book, you’ll learn how many of the most fundamental data science tools and algorithms work by implementing them from scratch.

If you have an aptitude for mathematics and some programming skills, author Joel Grus will help you get comfortable with the math and statistics at the core of data science, and with hacking skills you need to get started as a data scientist. Today’s messy glut of data holds answers to questions no one’s even thought to ask. This book provides you with the know-how to dig those answers out.

Get a crash course in Python
Learn the basics of linear algebra, statistics, and probability—and understand how and when they're used in data science
Collect, explore, clean, munge, and manipulate data
Dive into the fundamentals of machine learning
Implement models such as k-nearest Neighbors, Naive Bayes, linear and logistic regression, decision trees, neural networks, and clustering
Explore recommender systems, natural language processing, network analysis, MapReduce, and databases