Are you wondering when to use Gaussian mixture models for clustering? Well then you are in the right place! In this article we tell you everything you need to know to understand when to use Gaussian mixture models.
We start out by talking about what types of datasets Gaussian mixture models are generally used for. Next, we lay out some of the main advantages and disadvantages you should keep in mind when deciding whether to use Gaussian mixture models for clustering. Finally, we provide specific examples of situations where you should and should not use Gaussian mixture models for clustering.
Data for Gaussian mixture models
What kind of datasets can you use Gaussian mixture models on? In general, Gaussian mixture models should be used for datasets that have multiple interesting features but no specific outcome variable you want to predict. Specifically, Gaussian mixture models perform best on datasets where the features within each cluster are roughly normally distributed. That means that Gaussian mixture models should generally be used when your features are numeric.
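If your features are numeric, fitting a Gaussian mixture model is straightforward. Here is a minimal sketch using scikit-learn's GaussianMixture on a synthetic numeric dataset (the data and the choice of three components are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic numeric features drawn from three roughly Gaussian clusters
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Fit a mixture of three Gaussians and assign each point to a cluster
gmm = GaussianMixture(n_components=3, random_state=42)
labels = gmm.fit_predict(X)
```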
Advantages and disadvantages of Gaussian mixture models
What are some of the main advantages and disadvantages of clustering with Gaussian mixture models? Here are the main points you should keep in mind when deciding whether to use Gaussian mixture models.
Advantages of Gaussian mixture models
- Probabilistic estimates of belonging to each cluster. One of the main advantages of Gaussian mixture models is that they provide estimates of the probability that each data point belongs to each cluster. This provides much more contextual information than the single hard cluster assignment that most other clustering algorithms provide. These probability estimates can be very useful when examining ambiguous data points that fall at the border of two clusters.
- Does not assume spherical clusters. Another advantage that Gaussian mixture models have over models like k-means clustering is that they do not assume that all clusters are uniformly shaped spheres. Instead, Gaussian mixture models can accommodate clusters of varying shapes (so long as they are roughly elliptical).
- Handles clusters of differing sizes. In addition to accommodating clusters of varying shapes, Gaussian mixture models can also accommodate clusters of varying sizes. This provides even more flexibility in the types of clusters that can be handled.
- Less sensitive to scale. Gaussian mixture models are generally less sensitive to scale than other clustering algorithms. That means that you may not need to rescale your variables before using them for clustering.
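The first advantage above, soft cluster assignments, can be seen directly in scikit-learn, where predict_proba returns a probability for each point and cluster (the synthetic overlapping data here is an assumption for illustration):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Two overlapping synthetic clusters, so some points are ambiguous
X, _ = make_blobs(n_samples=200, centers=2, cluster_std=2.0, random_state=0)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

probs = gmm.predict_proba(X)  # shape (200, 2); each row sums to 1
hard_labels = gmm.predict(X)  # hard assignments, for comparison
```

A point deep inside a cluster gets a probability near 1 for that cluster, while a borderline point gets a more even split across the two rows.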
Disadvantages of Gaussian mixture models
- Difficult to incorporate categorical features. One of the main disadvantages of clustering with Gaussian mixture models is that it is difficult to incorporate categorical variables. Gaussian mixture models operate under the assumption that all of your features are normally distributed, so they are not easily adapted to categorical data.
- Assumes a normal distribution for features. In addition to struggling with categorical features, Gaussian mixture models may also struggle with numeric variables that are not normally distributed. This means that you should take some time to look at the distributions of your features before reaching for this clustering algorithm.
- Makes some assumptions about cluster shape. While Gaussian mixture models are able to handle clusters of varying shapes and sizes, they do make some assumptions about the shape of the clusters. Specifically, the clusters are assumed to be elliptical. This means that Gaussian mixture models will not perform as well in cases where clusters are very irregularly shaped.
- Needs sufficient data for each cluster. Since you need to estimate a covariance matrix for each cluster in order to use Gaussian mixture models, you should make sure that you have enough data points in each cluster to estimate the covariance adequately. The amount of data required is not huge, but it is larger than for simpler algorithms that do not estimate a covariance matrix.
- Need to specify the number of clusters. Another disadvantage of Gaussian mixture models is that you need to specify the number of clusters you want to use ahead of time. This can be a non-trivial task when you do not have intuition about how many clusters there should be.
- Somewhat sensitive to outliers. Since Gaussian mixture models operate under the assumption that your features are normally distributed, they can be thrown off when there are many outliers in the data. That being said, some implementations of Gaussian mixture models allow outliers to be separated out into their own cluster.
- Somewhat sensitive to initialization conditions. Gaussian mixture models are somewhat sensitive to initialization conditions such as the random seed and the starting points for the cluster centers. This means you may get different results if you run the algorithm multiple times.
- Somewhat slow. One final disadvantage of Gaussian mixture models is that they tend to be slower than similar clustering algorithms like k-means clustering. This is especially true when there are many features in your dataset.
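Two of the disadvantages above, having to specify the number of clusters and sensitivity to initialization, can be softened in practice. A common approach, sketched here with scikit-learn, is to compare BIC scores across candidate cluster counts and to use n_init to run several initializations and keep the best (the synthetic data and the candidate range of 1 to 6 are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=400, centers=3, random_state=1)

# Fit models with 1-6 components; a lower BIC indicates a better trade-off
# between fit and model complexity. n_init=5 runs five initializations per
# fit, which mitigates sensitivity to starting conditions.
bics = {
    k: GaussianMixture(n_components=k, n_init=5, random_state=1).fit(X).bic(X)
    for k in range(1, 7)
}
best_k = min(bics, key=bics.get)
```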
When to use Gaussian mixture models
When should you use Gaussian mixture models for clustering? Here are some examples of scenarios that are particularly well suited to Gaussian mixture models.
- Clusters are not fully separated. If you have reason to believe that your clusters are not fully separated, Gaussian mixture models are a great choice. Rather than laying down a hard ruling that a given point belongs to a specific cluster, Gaussian mixture models provide more contextual information about the probability that each point belongs to each cluster. These probabilities are useful in ambiguous cases where a point lies at the intersection of two clusters.
- Need a probability that each point belongs to each cluster. More generally, clustering with Gaussian mixture models is a great option for cases where you need an estimate of the probability that a point belongs to each cluster. For example, if you were specifically looking for hybrid observations that share characteristics of a few different clusters, the probability scores provided by Gaussian mixture models would give you a way to identify these observations.
- You do not want to rescale your data. Most clustering algorithms are relatively sensitive to scale and are thrown off in cases where different features are on different scales. Gaussian mixture models, however, are not so sensitive to scale. That means that they are great for cases where your features are on different scales and you do not want to rescale them.
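As a concrete sketch of the first two scenarios, you can flag ambiguous or hybrid observations by thresholding the maximum cluster probability (the 0.9 threshold and the synthetic overlapping clusters are arbitrary assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Two overlapping clusters, so some points genuinely sit on the border
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [3, 0]],
                  cluster_std=1.5, random_state=0)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# A point is "ambiguous" if no single cluster claims it with high confidence
probs = gmm.predict_proba(X)
ambiguous = probs.max(axis=1) < 0.9  # boolean mask of borderline points
```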
When not to use Gaussian mixture models
When should you avoid using Gaussian mixture models? Here are some examples of situations where you should avoid them.
- Categorical and non-normal features. Clustering with Gaussian mixture models works best when your features are roughly normally distributed. That means that it does not perform as well for datasets with many categorical features. If you have a mixture of numeric and categorical features in your data, you may be better off using hierarchical clustering or an extension of k-means clustering like k-prototypes.
- Irregularly shaped clusters. While Gaussian mixture models can handle clusters of varying shapes and sizes, they only work well in cases where your clusters are roughly elliptical. If you have reason to believe that your clusters are very irregularly shaped, you should use a more flexible model like DBSCAN that does not make any assumptions about cluster shape.
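To illustrate the irregular-cluster point, consider scikit-learn's make_moons dataset, whose two crescent-shaped clusters are far from elliptical; a density-based method like DBSCAN can trace the crescents, while a two-component Gaussian mixture is forced to carve the data into two ellipses (the noise level and eps value are assumptions chosen for this toy dataset):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture

# Two interleaved crescents: irregular, non-elliptical clusters
X, y_true = make_moons(n_samples=300, noise=0.05, random_state=0)

# A two-component GMM assumes each cluster is an ellipse
gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# DBSCAN groups points by density and can follow the crescent shapes
db_labels = DBSCAN(eps=0.3).fit_predict(X)
```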
Related articles
- When to use hierarchical clustering
- When to use k-means clustering
- When to use DBSCAN
- When to use spectral clustering
Are you trying to figure out which machine learning model is best for your next data science project? Check out our comprehensive guide on how to choose the right machine learning model.