Mixture modeling with compact support distributions for unsupervised learning

Ambedkar Dukkipati, Debarghya Ghoshdastidar, Jinu Krishnan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The importance of the q-Gaussian distributions is attributed to their power-law nature and to the fact that they generalize the Gaussian distributions (q → 1 retrieves the Gaussian distributions). While for q > 1 a q-Gaussian distribution is simply a Student's t-distribution, which is heavy-tailed, for q < 1 it is a distribution with compact support. Although mixture modeling with t-distributions has been studied, mixture modeling with compact support distributions has not been explored in the literature. The main aim of this paper is to study mixture modeling using q-Gaussian distributions that have compact support. We study estimation of the parameters of this model by maximum likelihood via the Expectation-Maximization (EM) algorithm. We further study applications of these compact support distributions to clustering and anomaly detection. To the best of our knowledge, this is the first work that studies compact support distributions in statistical modeling for unsupervised learning problems.
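For context, a common univariate parameterization of the q-Gaussian density is sketched below as a reference form; the paper's exact normalization and its multivariate extension may differ, and μ and β here denote a location and an inverse-scale parameter introduced only for illustration:

p_q(x) \propto \bigl[\, 1 - (1-q)\,\beta\,(x-\mu)^2 \,\bigr]_{+}^{\,1/(1-q)}, \qquad [u]_{+} = \max(u, 0).

For q < 1 the bracket is negative outside |x - \mu| \le \bigl((1-q)\beta\bigr)^{-1/2}, so the density vanishes there and the support is compact; letting q → 1 recovers the Gaussian kernel \exp\bigl(-\beta (x-\mu)^2\bigr), while for 1 < q < 3 the density is a rescaled Student's t-distribution.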

Original language: English
Title of host publication: 2016 International Joint Conference on Neural Networks, IJCNN 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2706-2713
Number of pages: 8
ISBN (Electronic): 9781509006199
DOIs
State: Published - 31 Oct 2016
Externally published: Yes
Event: 2016 International Joint Conference on Neural Networks, IJCNN 2016 - Vancouver, Canada
Duration: 24 Jul 2016 - 29 Jul 2016

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2016-October

Conference

Conference: 2016 International Joint Conference on Neural Networks, IJCNN 2016
Country/Territory: Canada
City: Vancouver
Period: 24/07/16 - 29/07/16
