{
"cells": [
{
"cell_type": "raw",
"metadata": {
"ExecuteTime": {
"end_time": "2020-01-02T10:50:31.982740Z",
"start_time": "2020-01-02T10:50:31.976911Z"
}
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Machine Learning with vaex.ml\n",
"\n",
"If you want to try out this notebook with a live Python kernel, use mybinder.\n",
"\n",
"The `vaex.ml` package brings some machine learning algorithms to `vaex`. If you installed the individual subpackages (`vaex-core`, `vaex-hdf5`, ...) instead of the `vaex` metapackage, you may need to install it by running `pip install vaex-ml`, or `conda install -c conda-forge vaex-ml`.\n",
"\n",
"The API of `vaex.ml` stays close to that of [scikit-learn](https://scikit-learn.org/stable/), while providing better performance and the ability to efficiently perform operations on data that is larger than the available RAM. This page is an overview and a brief introduction to the capabilities offered by `vaex.ml`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:33.509897Z",
"start_time": "2020-07-14T15:58:32.865015Z"
}
},
"outputs": [],
"source": [
"import vaex\n",
"vaex.multithreading.thread_count_default = 8\n",
"import vaex.ml\n",
"\n",
"import numpy as np\n",
"import pylab as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the well known [Iris flower](https://en.wikipedia.org/wiki/Iris_flower_data_set) and Titanic passenger list datasets, two classical datasets for machine learning demonstrations."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:35.184184Z",
"start_time": "2020-07-14T15:58:35.175838Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"df = vaex.ml.datasets.load_iris()\n",
"df.scatter(df.petal_length, df.petal_width, c_expr=df.class_);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Preprocessing \n",
"\n",
"### Scaling of numerical features\n",
"\n",
"`vaex.ml` provides the common numerical scalers:\n",
"\n",
"* `vaex.ml.StandardScaler` - Scale features by removing their mean and scaling them to unit variance;\n",
"* `vaex.ml.MinMaxScaler` - Scale features to a given range;\n",
"* `vaex.ml.RobustScaler` - Scale features by removing their median and scaling them according to a given percentile range;\n",
"* `vaex.ml.MaxAbsScaler` - Scale features by their maximum absolute value.\n",
" \n",
"The usage is quite similar to that of `scikit-learn`, in the sense that each transformer implements the `.fit` and `.transform` methods."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:36.058105Z",
"start_time": "2020-07-14T15:58:36.021979Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
"# sepal_length sepal_width petal_length petal_width class_ scaled_petal_length scaled_petal_width scaled_sepal_length scaled_sepal_width\n",
"0 5.9 3.0 4.2 1.5 1 0.25096730693923325 0.39617188299171285 0.06866179325140277 -0.12495760117130607\n",
"1 6.1 3.0 4.6 1.4 1 0.4784301228962429 0.26469891297233916 0.3109975341387059 -0.12495760117130607\n",
"2 6.6 2.9 4.6 1.3 1 0.4784301228962429 0.13322594295296575 0.9168368863569659 -0.3563605663033572\n",
"3 6.7 3.3 5.7 2.1 2 1.1039528667780207 1.1850097031079545 1.0380047568006185 0.5692512942248463\n",
"4 5.5 4.2 1.4 0.2 0 -1.341272404759837 -1.3129767272601438 -0.4160096885232057 2.6518779804133055\n",
"... ... ... ... ... ... ... ... ... ...\n",
"145 5.2 3.4 1.4 0.2 0 -1.341272404759837 -1.3129767272601438 -0.7795132998541615 0.8006542593568975\n",
"146 5.1 3.8 1.6 0.2 0 -1.2275409967813318 -1.3129767272601438 -0.9006811702978141 1.726266119885101\n",
"147 5.8 2.6 4.0 1.2 1 0.13723589896072813 0.0017529729335920385 -0.052506077192249874 -1.0505694616995096\n",
"148 5.7 3.8 1.7 0.3 0 -1.1706752927920796 -1.18150375724077 -0.17367394763590144 1.726266119885101\n",
"149 6.2 2.9 4.3 1.3 1 0.30783301092848553 0.13322594295296575 0.4321654045823586 -0.3563605663033572"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"scaler = vaex.ml.StandardScaler(features=features, prefix='scaled_')\n",
"scaler.fit(df)\n",
"df_trans = scaler.transform(df)\n",
"df_trans"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output of the `.transform` method of any `vaex.ml` transformer is a _shallow copy_ of a DataFrame that contains the resulting features of the transformations in addition to the original columns. A shallow copy means that this new DataFrame just references the original one, and no extra memory is used. In addition, the resulting features, in this case the scaled numerical features, are _virtual columns_, which do not take up any memory but are computed on the fly when needed. This approach is ideal for working with very large datasets."
]
},
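{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can inspect this directly. As a small illustration (assuming the scaler from the cell above has been applied), the `virtual_columns` dictionary of the DataFrame maps each scaled feature to the expression that defines it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Each scaled feature is stored as an expression, not as materialized data\n",
"print(df_trans.virtual_columns['scaled_petal_length'])"
]
},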
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Encoding of categorical features\n",
"\n",
"`vaex.ml` contains several categorical encoders:\n",
"\n",
"* `vaex.ml.LabelEncoder` - Encode features with as many integers as categories, starting from 0;\n",
"* `vaex.ml.OneHotEncoder` - Encode features according to the one-hot scheme;\n",
"* `vaex.ml.FrequencyEncoder` - Encode features by the frequency of their respective categories;\n",
"* `vaex.ml.BayesianTargetEncoder` - Encode categories with the mean of their target value;\n",
"* `vaex.ml.WeightOfEvidenceEncoder` - Encode categories by their weight of evidence value.\n",
"\n",
"The following is a quick example using the Titanic dataset."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:36.683376Z",
"start_time": "2020-07-14T15:58:36.671430Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
" # pclass survived name sex age sibsp parch ticket fare cabin embarked boat body home_dest\n",
" 0 1 True Allen, Miss. Elisabeth Walton female 29 0 0 24160 211.338 B5 S 2 nan St Louis, MO\n",
" 1 1 True Allison, Master. Hudson Trevor male 0.9167 1 2 113781 151.55 C22 C26 S 11 nan Montreal, PQ / Chesterville, ON\n",
" 2 1 False Allison, Miss. Helen Loraine female 2 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON\n",
" 3 1 False Allison, Mr. Hudson Joshua Creighton male 30 1 2 113781 151.55 C22 C26 S None 135 Montreal, PQ / Chesterville, ON\n",
" 4 1 False Allison, Mrs. Hudson J C (Bessie Waldo Daniels) female 25 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = vaex.ml.datasets.load_titanic()\n",
"df.head(5)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:37.279175Z",
"start_time": "2020-07-14T15:58:37.135444Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
" # pclass survived name sex age sibsp parch ticket fare cabin embarked boat body home_dest label_encoded_embarked embarked_0.0 embarked_C embarked_Q embarked_S frequency_encoded_embarked mean_encoded_embarked woe_encoded_embarked\n",
" 0 1 True Allen, Miss. Elisabeth Walton female 29 0 0 24160 211.338 B5 S 2 nan St Louis, MO 1 0 0 0 1 0.698243 0.337472 -0.696431\n",
" 1 1 True Allison, Master. Hudson Trevor male 0.9167 1 2 113781 151.55 C22 C26 S 11 nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431\n",
" 2 1 False Allison, Miss. Helen Loraine female 2 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431\n",
" 3 1 False Allison, Mr. Hudson Joshua Creighton male 30 1 2 113781 151.55 C22 C26 S None 135 Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431\n",
" 4 1 False Allison, Mrs. Hudson J C (Bessie Waldo Daniels) female 25 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"label_encoder = vaex.ml.LabelEncoder(features=['embarked'])\n",
"one_hot_encoder = vaex.ml.OneHotEncoder(features=['embarked'])\n",
"freq_encoder = vaex.ml.FrequencyEncoder(features=['embarked'])\n",
"bayes_encoder = vaex.ml.BayesianTargetEncoder(features=['embarked'], target='survived')\n",
"woe_encoder = vaex.ml.WeightOfEvidenceEncoder(features=['embarked'], target='survived')\n",
"\n",
"df = label_encoder.fit_transform(df)\n",
"df = one_hot_encoder.fit_transform(df)\n",
"df = freq_encoder.fit_transform(df)\n",
"df = bayes_encoder.fit_transform(df)\n",
"df = woe_encoder.fit_transform(df)\n",
"\n",
"df.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2020-01-02T13:09:43.742926Z",
"start_time": "2020-01-02T13:09:43.676031Z"
}
},
"source": [
"Notice that the transformed features are all included in the resulting DataFrame and are appropriately named. This is excellent for the construction of various diagnostic plots, and for the engineering of more complex features. The fact that the resulting (encoded) features take up no memory allows one to try out or combine a variety of preprocessing steps without spending any extra memory."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Feature Engineering\n",
"\n",
"### KBinsDiscretizer\n",
"\n",
"With the `KBinsDiscretizer` you can convert a continuous feature into a discrete one by binning the data into specified intervals. You can specify the number of bins and the strategy used to determine their size:\n",
"\n",
"* \"uniform\" - all bins have equal sizes;\n",
"* \"quantile\" - all bins have (approximately) the same number of samples in them;\n",
"* \"kmeans\" - values in each bin belong to the same 1D cluster as determined by the `KMeans` algorithm."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:38.037547Z",
"start_time": "2020-07-14T15:58:38.015096Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
" # pclass survived name sex age sibsp parch ticket fare cabin embarked boat body home_dest label_encoded_embarked embarked_0.0 embarked_C embarked_Q embarked_S frequency_encoded_embarked mean_encoded_embarked woe_encoded_embarked binned_age\n",
" 0 1 True Allen, Miss. Elisabeth Walton female 29 0 0 24160 211.338 B5 S 2 nan St Louis, MO 1 0 0 0 1 0.698243 0.337472 -0.696431 2\n",
" 1 1 True Allison, Master. Hudson Trevor male 0.9167 1 2 113781 151.55 C22 C26 S 11 nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 0\n",
" 2 1 False Allison, Miss. Helen Loraine female 2 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 0\n",
" 3 1 False Allison, Mr. Hudson Joshua Creighton male 30 1 2 113781 151.55 C22 C26 S None 135 Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 2\n",
" 4 1 False Allison, Mrs. Hudson J C (Bessie Waldo Daniels) female 25 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 2"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kbdisc = vaex.ml.KBinsDiscretizer(features=['age'], n_bins=5, strategy='quantile')\n",
"df = kbdisc.fit_transform(df)\n",
"df.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### GroupBy Transformer\n",
"\n",
"The `GroupByTransformer` is a handy feature in `vaex.ml` that lets you perform group-by aggregations on the training data, and then use those aggregations as features in the training and test sets."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:39.124527Z",
"start_time": "2020-07-14T15:58:39.088923Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
" # pclass survived name sex age sibsp parch ticket fare cabin embarked boat body home_dest label_encoded_embarked embarked_0.0 embarked_C embarked_Q embarked_S frequency_encoded_embarked mean_encoded_embarked woe_encoded_embarked binned_age age_mean age_std fare_mean fare_std\n",
" 0 1 True Allen, Miss. Elisabeth Walton female 29 0 0 24160 211.338 B5 S 2 nan St Louis, MO 1 0 0 0 1 0.698243 0.337472 -0.696431 2 39.1599 14.5224 87.509 80.3226\n",
" 1 1 True Allison, Master. Hudson Trevor male 0.9167 1 2 113781 151.55 C22 C26 S 11 nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 0 39.1599 14.5224 87.509 80.3226\n",
" 2 1 False Allison, Miss. Helen Loraine female 2 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 0 39.1599 14.5224 87.509 80.3226\n",
" 3 1 False Allison, Mr. Hudson Joshua Creighton male 30 1 2 113781 151.55 C22 C26 S None 135 Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 2 39.1599 14.5224 87.509 80.3226\n",
" 4 1 False Allison, Mrs. Hudson J C (Bessie Waldo Daniels) female 25 1 2 113781 151.55 C22 C26 S None nan Montreal, PQ / Chesterville, ON 1 0 0 0 1 0.698243 0.337472 -0.696431 2 39.1599 14.5224 87.509 80.3226"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gbt = vaex.ml.GroupByTransformer(by='pclass', agg={'age': ['mean', 'std'],\n",
" 'fare': ['mean', 'std'],\n",
" })\n",
"df = gbt.fit_transform(df)\n",
"df.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dimensionality reduction \n",
"\n",
"### Principal Component Analysis\n",
"\n",
"The [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) implemented in `vaex.ml` can scale to a very large number of samples, even if the data we want to transform does not fit into RAM. To demonstrate this, let us do a PCA transformation on the Iris dataset. For this example, we have replicated this dataset thousands of times, such that it contains over **1 billion** samples."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:40.059220Z",
"start_time": "2020-07-14T15:58:40.054628Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of samples in DataFrame: 1,005,000,000\n"
]
}
],
"source": [
"df = vaex.ml.datasets.load_iris_1e9()\n",
"n_samples = len(df)\n",
"print(f'Number of samples in DataFrame: {n_samples:,}')"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:48.183478Z",
"start_time": "2020-07-14T15:58:41.911503Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[#######################################-] 100.00% elapsed time : 2.73s = 0.0m = 0.0h\n",
"[########################################] 100.00% elapsed time : 3.53s = 0.1m = 0.0h \n",
" "
]
}
],
"source": [
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"pca = vaex.ml.PCA(features=features, n_components=4, progress=True)\n",
"pca.fit(df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The PCA transformer implemented in `vaex.ml` can be fit in well under a minute, even when the data comprises 4 columns and 1 billion rows. "
]
},
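{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clustering\n",
"\n",
"### K-Means\n",
"\n",
"`vaex.ml` also implements the K-Means clustering algorithm. The following is a minimal sketch that fits K-Means on the Iris dataset and maps the arbitrary cluster labels onto the class labels, so that the two can be compared side by side. The default prediction column name (`prediction_kmeans`) and the values in the `mapper` are assumptions: the latter depend on the order in which the clusters happen to be fitted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = vaex.ml.datasets.load_iris()\n",
"\n",
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"kmeans = vaex.ml.cluster.KMeans(features=features, n_clusters=3, max_iter=100, random_state=42)\n",
"kmeans.fit(df)\n",
"\n",
"df_trans = kmeans.transform(df)\n",
"# Map the cluster labels to the class labels for comparison (assumed mapping)\n",
"df_trans['predicted_kmean_map'] = df_trans.prediction_kmeans.map(mapper={0: 1, 1: 2, 2: 0})"
]
},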
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:53.502282Z",
"start_time": "2020-07-14T15:58:53.487728Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"data": {
"text/html": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"fig = plt.figure(figsize=(12, 5))\n",
"\n",
"plt.subplot(121)\n",
"df_trans.scatter(df_trans.petal_length, df_trans.petal_width, c_expr=df_trans.class_)\n",
"plt.title('Original classes')\n",
"\n",
"plt.subplot(122)\n",
"df_trans.scatter(df_trans.petal_length, df_trans.petal_width, c_expr=df_trans.predicted_kmean_map)\n",
"plt.title('Predicted classes')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As with any algorithm implemented in `vaex.ml`, K-Means can be used on billions of samples. Fitting takes **under 2 minutes** when applied on the oversampled Iris dataset, numbering over **1 billion** samples."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:58:58.284463Z",
"start_time": "2020-07-14T15:58:58.280028Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of samples in DataFrame: 1,005,000,000\n"
]
}
],
"source": [
"df = vaex.ml.datasets.load_iris_1e9()\n",
"n_samples = len(df)\n",
"print(f'Number of samples in DataFrame: {n_samples:,}')"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:59:20.061389Z",
"start_time": "2020-07-14T15:58:58.855735Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Iteration 0, inertia 1272768079.8016784\n",
"Iteration 1, inertia 818533259.4873675\n",
"Iteration 2, inertia 813586111.1870002\n",
"Iteration 3, inertia 812725317.4234301\n",
"Iteration 4, inertia 812680582.0575261\n",
"Iteration 5, inertia 812680582.057527\n",
"CPU times: user 2min 46s, sys: 1.14 s, total: 2min 48s\n",
"Wall time: 21.2 s\n"
]
}
],
"source": [
"%%time\n",
"\n",
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"kmeans = vaex.ml.cluster.KMeans(features=features, n_clusters=3, max_iter=100, verbose=True, random_state=31)\n",
"kmeans.fit(df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Supervised learning\n",
"\n",
"While `vaex.ml` does not yet implement any supervised machine learning models, it does provide wrappers to several popular libraries such as [scikit-learn](https://scikit-learn.org/), [XGBoost](https://xgboost.readthedocs.io/), [LightGBM](https://lightgbm.readthedocs.io/) and [CatBoost](https://catboost.ai/). \n",
"\n",
"The main benefit of these wrappers is that they turn the models into `vaex.ml` transformers. This means the models become part of the DataFrame _state_ and thus can be serialized, and their predictions can be returned as _virtual columns_. This is especially useful for creating various diagnostic plots and evaluating performance metrics at no memory cost, as well as building ensembles. \n",
"\n",
"### `Scikit-Learn` example\n",
"\n",
"The `vaex.ml.sklearn` module provides convenient wrappers to the `scikit-learn` estimators. In fact, these wrappers can be used with any library that follows the API convention established by `scikit-learn`, i.e. implements the `.fit` and `.predict` methods.\n",
"\n",
"Here is an example:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T15:59:30.707188Z",
"start_time": "2020-07-14T15:59:30.385719Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
"# sepal_length sepal_width petal_length petal_width class_ prediction\n",
"0 5.9 3.0 4.2 1.5 1 1\n",
"1 6.1 3.0 4.6 1.4 1 1\n",
"2 6.6 2.9 4.6 1.3 1 1\n",
"3 6.7 3.3 5.7 2.1 2 2\n",
"4 5.5 4.2 1.4 0.2 0 0\n",
"... ... ... ... ... ... ...\n",
"145 5.2 3.4 1.4 0.2 0 0\n",
"146 5.1 3.8 1.6 0.2 0 0\n",
"147 5.8 2.6 4.0 1.2 1 1\n",
"148 5.7 3.8 1.7 0.3 0 0\n",
"149 6.2 2.9 4.3 1.3 1 1"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from vaex.ml.sklearn import Predictor\n",
"from sklearn.ensemble import GradientBoostingClassifier\n",
"\n",
"df = vaex.ml.datasets.load_iris()\n",
"\n",
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"target = 'class_'\n",
"\n",
"model = GradientBoostingClassifier(random_state=42)\n",
"vaex_model = Predictor(features=features, target=target, model=model, prediction_name='prediction')\n",
"\n",
"vaex_model.fit(df=df)\n",
"\n",
"df = vaex_model.transform(df)\n",
"df"
]
},
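{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the `prediction` column is virtual, evaluating a performance metric costs no extra memory. As a small sketch (using the column created in the previous cell), the accuracy can be computed out-of-core:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fraction of correctly classified samples, computed without materializing the predictions\n",
"accuracy = (df.prediction == df.class_).sum() / len(df)\n",
"print(f'Accuracy: {accuracy:.3f}')"
]
},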
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One can still train a predictive model on datasets that are too big to fit into memory by leveraging the on-line learners provided by `scikit-learn`. The `vaex.ml.sklearn.IncrementalPredictor` conveniently wraps these learners and provides control over how the data is passed to them from a `vaex` DataFrame. \n",
"\n",
"Let us train a model on the oversampled Iris dataset which comprises over 1 billion samples."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T16:08:08.898670Z",
"start_time": "2020-07-14T15:59:33.194286Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "3461e550b491449c8dc56690f61d34d6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=0.0, max=1.0), Label(value='In progress...')))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
""
],
"text/plain": [
"# sepal_length sepal_width petal_length petal_width class_ prediction\n",
"0 5.9 3.0 4.2 1.5 0 0\n",
"1 6.1 3.0 4.6 1.4 0 0\n",
"2 6.6 2.9 4.6 1.3 0 0\n",
"3 6.7 3.3 5.7 2.1 0 0\n",
"4 5.5 4.2 1.4 0.2 0 0\n",
"... ... ... ... ... ... ...\n",
"1,004,999,995 5.2 3.4 1.4 0.0 0 0\n",
"1,004,999,996 5.1 3.8 1.6 0.0 0 0\n",
"1,004,999,997 5.8 2.6 4.0 0.0 0 0\n",
"1,004,999,998 5.7 3.8 1.7 0.0 0 0\n",
"1,004,999,999 6.2 2.9 4.3 0.0 0 0"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from vaex.ml.sklearn import IncrementalPredictor\n",
"from sklearn.linear_model import SGDClassifier\n",
"\n",
"df = vaex.ml.datasets.load_iris_1e9()\n",
"\n",
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"target = 'class_'\n",
"\n",
"model = SGDClassifier(learning_rate='constant', eta0=0.0001, random_state=42)\n",
"vaex_model = IncrementalPredictor(features=features, target=target, model=model, \n",
" batch_size=11_000_000, partial_fit_kwargs={'classes':[0, 1, 2]})\n",
"\n",
"vaex_model.fit(df=df, progress='widget')\n",
"\n",
"df = vaex_model.transform(df)\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### `XGBoost` example\n",
"\n",
"Libraries such as `XGBoost` provide more options, such as validation during training and early stopping. We provide wrappers that keep close to the native API of these libraries, in addition to the `scikit-learn` API. \n",
"\n",
"While the following example showcases the `XGBoost` wrapper, `vaex.ml` implements similar wrappers for `LightGBM` and `CatBoost`."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T16:08:44.463784Z",
"start_time": "2020-07-14T16:08:43.893355Z"
}
},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
"# sepal_length sepal_width petal_length petal_width class_ xgboost_prediction\n",
"0 5.9 3.0 4.2 1.5 1 1.0\n",
"1 6.1 3.0 4.6 1.4 1 1.0\n",
"2 6.6 2.9 4.6 1.3 1 1.0\n",
"3 6.7 3.3 5.7 2.1 2 2.0\n",
"4 5.5 4.2 1.4 0.2 0 0.0\n",
"... ... ... ... ... ... ...\n",
"80,395 5.2 3.4 1.4 0.2 0 0.0\n",
"80,396 5.1 3.8 1.6 0.2 0 0.0\n",
"80,397 5.8 2.6 4.0 1.2 1 1.0\n",
"80,398 5.7 3.8 1.7 0.3 0 0.0\n",
"80,399 6.2 2.9 4.3 1.3 1 1.0"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from vaex.ml.xgboost import XGBoostModel\n",
"\n",
"df = vaex.ml.datasets.load_iris_1e5()\n",
"df_train, df_test = df.ml.train_test_split(test_size=0.2, verbose=False)\n",
"\n",
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"target = 'class_'\n",
"\n",
"params = {'learning_rate': 0.1,\n",
" 'max_depth': 3, \n",
" 'num_class': 3, \n",
" 'objective': 'multi:softmax',\n",
" 'subsample': 1,\n",
" 'random_state': 42,\n",
" 'n_jobs': -1}\n",
"\n",
"\n",
"booster = XGBoostModel(features=features, target=target, num_boost_round=500, params=params)\n",
"booster.fit(df=df_train, evals=[(df_train, 'train'), (df_test, 'test')], early_stopping_rounds=5)\n",
"\n",
"df_test = booster.transform(df_test)\n",
"df_test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### `CatBoost` example\n",
"\n",
"The CatBoost library supports summing up models. With this feature, we can use CatBoost to train a model on data that is otherwise too large to fit in memory. The idea is to train a single CatBoost model per chunk of data, and then sum up the individual models to create a master model. To use this feature via `vaex.ml`, just specify the `batch_size` argument in the `CatBoostModel` wrapper. One can also specify additional options, such as the strategy on how to sum up the individual models, or how they should be weighted."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T16:09:54.623370Z",
"start_time": "2020-07-14T16:08:46.494467Z"
},
"tags": [
"skip-ci"
]
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "f92026988f7d46789de0f3a95257dc86",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=0.0, max=1.0), Label(value='In progress...')))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
""
],
"text/plain": [
"# sepal_length sepal_width petal_length petal_width class_ catboost_prediction\n",
"0 5.9 3.0 4.2 1.5 1 [1]\n",
"1 6.1 3.0 4.6 1.4 1 [1]\n",
"2 6.6 2.9 4.6 1.3 1 [1]\n",
"3 6.7 3.3 5.7 2.1 2 [2]\n",
"4 5.5 4.2 1.4 0.2 0 [0]\n",
"... ... ... ... ... ... ...\n",
"80,399,995 5.2 3.4 1.4 0.2 0 [0]\n",
"80,399,996 5.1 3.8 1.6 0.2 0 [0]\n",
"80,399,997 5.8 2.6 4.0 1.2 1 [1]\n",
"80,399,998 5.7 3.8 1.7 0.3 0 [0]\n",
"80,399,999 6.2 2.9 4.3 1.3 1 [1]"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from vaex.ml.catboost import CatBoostModel\n",
"\n",
"df = vaex.ml.datasets.load_iris_1e8()\n",
"df_train, df_test = df.ml.train_test_split(test_size=0.2, verbose=False)\n",
"\n",
"features = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']\n",
"target = 'class_'\n",
"\n",
"params = {\n",
" 'leaf_estimation_method': 'Gradient',\n",
" 'learning_rate': 0.1,\n",
" 'max_depth': 3,\n",
" 'bootstrap_type': 'Bernoulli',\n",
" 'subsample': 0.8,\n",
" 'sampling_frequency': 'PerTree',\n",
" 'colsample_bylevel': 0.8,\n",
" 'reg_lambda': 1,\n",
" 'objective': 'MultiClass',\n",
" 'eval_metric': 'MultiClass',\n",
" 'random_state': 42,\n",
" 'verbose': 0,\n",
"}\n",
"\n",
"booster = CatBoostModel(features=features, target=target, num_boost_round=23, \n",
" params=params, prediction_type='Class', batch_size=11_000_000)\n",
"booster.fit(df=df_train, progress='widget')\n",
"\n",
"df_test = booster.transform(df_test)\n",
"df_test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## State transfer - pipelines made easy\n",
"\n",
"Each `vaex` DataFrame consists of two parts: _data_ and _state_. The _data_ is immutable, and any operation such as filtering, adding new columns, or applying transformers or predictive models just modifies the _state_. This is an extremely powerful concept that can completely redefine how we imagine machine learning pipelines. \n",
"\n",
"As an example, let us once again create a model based on the Iris dataset. Here, we will create a couple of new features, do a PCA transformation, and finally train a predictive model. "
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"ExecuteTime": {
"end_time": "2020-07-14T16:10:19.919524Z",
"start_time": "2020-07-14T16:10:19.873625Z"
}
},
"outputs": [
{
"data": {
"text/html": [
"