Sklearn-GMM on large datasets
I have a large data set (I can't fit all of it in memory). I want to fit a GMM on this data set.
Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Solution 1:[1]
There is no reason to fit it repeatedly. Just randomly sample as many data points as you think your machine can compute in a reasonable time. If the variation in the data is not very high, the random sample will have approximately the same distribution as the full dataset.
import numpy as np

# np.random.choice needs a 1-D input, so sample row indices rather than whole rows;
# if the data does not fit in memory, sample rows while reading the file instead.
indices = np.random.choice(len(full_dataset), size=10000, replace=False)
randomly_sampled = full_dataset[indices]
GMM.fit(randomly_sampled)
And then use
GMM.predict(full_dataset)
# Again, you can predict one by one or batch by batch if you cannot read it all into memory
on the rest of the points to classify them.
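If the full dataset is too large to load at once, a minimal sketch of that batch-wise prediction could look like the following; the load_chunks() generator is hypothetical and stands for whatever streaming reader fits your data source.

import numpy as np

def predict_in_batches(gmm, load_chunks):
    # load_chunks() is a hypothetical generator yielding 2-D arrays of shape
    # (n_samples_in_chunk, n_features); gmm is an already fitted mixture model.
    labels = [gmm.predict(chunk) for chunk in load_chunks()]
    return np.concatenate(labels)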
Solution 2:[2]
fit will always forget previous data in scikit-learn. For incremental fitting, there is the partial_fit function. Unfortunately, GMM doesn't have a partial_fit (yet), so you can't do that.
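For comparison, this is roughly what incremental fitting looks like with a scikit-learn estimator that does expose partial_fit, such as sklearn.cluster.MiniBatchKMeans; this only illustrates the API, it is not a replacement for a GMM, and the data_batches iterable is hypothetical.

from sklearn.cluster import MiniBatchKMeans

mbk = MiniBatchKMeans(n_clusters=5)
for batch in data_batches:    # hypothetical iterable of 2-D arrays
    mbk.partial_fit(batch)    # updates the existing model instead of refitting from scratch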
Solution 3:[3]
I think you can set init_params to the empty string '' when you create the GMM object; then you might be able to train on the whole data set.
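A minimal sketch of that idea, assuming the legacy sklearn.mixture.GMM API (removed in later scikit-learn releases), where init_params='' skips the parameter re-initialization that fit normally performs, so successive calls to fit start from the current parameters; the data_batches iterable is hypothetical.

from sklearn.mixture import GMM   # legacy class; newer releases only provide GaussianMixture

gmm = GMM(n_components=5, init_params='', n_iter=10)
for batch in data_batches:        # hypothetical iterable of 2-D arrays
    gmm.fit(batch)                # with init_params='' the previous parameters are reused as the starting point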
Solution 4:[4]
As Andreas Mueller mentioned, GMM doesn't have a partial_fit yet, which would allow you to train the model in an iterative fashion. But you can make use of warm_start by setting its value to True when you create the GMM object. This allows you to iterate over batches of data and continue training the model from where you left off in the last iteration.
Hope this helps!
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Gioelelm |
| Solution 2 | Andreas Mueller |
| Solution 3 | |
| Solution 4 | Parthasarathy Subburaj |