Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE

Research output: Contribution to journal › Article

  • 2 Citations

Abstract

Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods which generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification algorithm. Such techniques, called oversamplers, modify the training data, allowing any classifier to be used with class-imbalanced datasets. Many algorithms have been proposed for this task, but most are complex and tend to generate unnecessary noise. This work presents a simple and effective oversampling method based on k-means clustering and SMOTE (synthetic minority oversampling technique), which avoids the generation of noise and effectively overcomes imbalances between and within classes. Empirical results of extensive experiments with 90 datasets show that training data oversampled with the proposed method improves classification results. Moreover, k-means SMOTE consistently outperforms other popular oversampling methods. An implementation is made available in the Python programming language.
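
The abstract describes the method only at a high level: cluster the input space with k-means, restrict oversampling to clusters where the minority class dominates, and generate SMOTE-style synthetic samples within each selected cluster. The sketch below illustrates that idea using only NumPy and scikit-learn; it is not the authors' released implementation, and the fixed minority-share threshold of 0.5 and the uniform allocation of new samples across clusters are simplifications of the paper's density-based weighting.

```python
# Minimal sketch of the k-means SMOTE idea: cluster, filter, interpolate per cluster.
# Assumptions: binary labels, a 0.5 minority-share cluster filter, and a uniform
# per-cluster allocation instead of the paper's density-based weighting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors


def kmeans_smote_sketch(X, y, minority_label, n_clusters=10, k_neighbors=5, seed=0):
    """Generate synthetic minority samples inside minority-dominated clusters."""
    rng = np.random.default_rng(seed)
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    n_new = int(np.sum(y != minority_label) - np.sum(y == minority_label))

    # Step 1: keep clusters that contain at least two minority samples and where
    # the minority class holds at least half of the points.
    selected = [c for c in range(n_clusters)
                if np.sum((clusters == c) & (y == minority_label)) >= 2
                and np.mean(y[clusters == c] == minority_label) >= 0.5]
    if n_new <= 0 or not selected:
        return X, y

    synthetic = []
    per_cluster = max(1, n_new // len(selected))  # uniform split across clusters
    for c in selected:
        Xc = X[(clusters == c) & (y == minority_label)]
        k = min(k_neighbors, len(Xc) - 1)
        # Step 2: SMOTE-style interpolation between a minority point and one of its
        # k nearest minority neighbours inside the cluster, so no synthetic points
        # are placed in majority-dominated regions.
        neigh = NearestNeighbors(n_neighbors=k + 1).fit(Xc)
        nn_idx = neigh.kneighbors(Xc, return_distance=False)[:, 1:]  # drop the point itself
        for _ in range(per_cluster):
            i = rng.integers(len(Xc))
            j = nn_idx[i, rng.integers(k)]
            synthetic.append(Xc[i] + rng.random() * (Xc[j] - Xc[i]))

    X_out = np.vstack([X, np.asarray(synthetic)])
    y_out = np.concatenate([y, np.full(len(synthetic), minority_label)])
    return X_out, y_out
```

Because the output is simply a larger training set, any classifier can be trained on it unchanged, which is the classifier-agnostic property the abstract emphasises. For practical use, the authors' released Python implementation (or the KMeansSMOTE sampler in the imbalanced-learn library, which follows this paper) should be preferred over this simplified sketch.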

Language: English
Pages: 1-20
Number of pages: 20
Journal: Information Sciences
Volume: 465
DOI: 10.1016/j.ins.2018.06.056
State: Published - 1 Oct 2018

Keywords

  • Class-imbalanced learning
  • Classification
  • Clustering
  • Oversampling
  • Supervised learning
  • Within-class imbalance

Cite this

@article{167acd69964e486cba7cc255bf7ed321,
title = "Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE",
abstract = "Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods which generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification algorithm. Such techniques, called oversamplers, modify the training data, allowing any classifier to be used with class-imbalanced datasets. Many algorithms have been proposed for this task, but most are complex and tend to generate unnecessary noise. This work presents a simple and effective oversampling method based on k-means clustering and SMOTE (synthetic minority oversampling technique), which avoids the generation of noise and effectively overcomes imbalances between and within classes. Empirical results of extensive experiments with 90 datasets show that training data oversampled with the proposed method improves classification results. Moreover, k-means SMOTE consistently outperforms other popular oversampling methods. An implementation1 is made available in the Python programming language.",
keywords = "Class-imbalanced learning, Classification, Clustering, Oversampling, Supervised learning, Within-class imbalance",
author = "Georgios Douzas and Fernando Ba{\cc}{\~a}o and Felix Last",
note = "Douzas, G., Ba{\cc}{\~a}o, F., & Last, F. (2018). Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Information Sciences, 465, 1-20. DOI: 10.1016/j.ins.2018.06.056",
year = "2018",
month = "10",
day = "1",
doi = "10.1016/j.ins.2018.06.056",
language = "English",
volume = "465",
pages = "1--20",
journal = "Information Sciences",
issn = "0020-0255",
publisher = "Elsevier Science B.V., Amsterdam.",

}

TY - JOUR

T1 - Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE

AU - Douzas, Georgios

AU - Bação, Fernando

AU - Last, Felix

N1 - Douzas, G., Bação, F., & Last, F. (2018). Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Information Sciences, 465, 1-20. DOI: 10.1016/j.ins.2018.06.056

PY - 2018/10/1

Y1 - 2018/10/1

N2 - Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods which generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification algorithm. Such techniques, called oversamplers, modify the training data, allowing any classifier to be used with class-imbalanced datasets. Many algorithms have been proposed for this task, but most are complex and tend to generate unnecessary noise. This work presents a simple and effective oversampling method based on k-means clustering and SMOTE (synthetic minority oversampling technique), which avoids the generation of noise and effectively overcomes imbalances between and within classes. Empirical results of extensive experiments with 90 datasets show that training data oversampled with the proposed method improves classification results. Moreover, k-means SMOTE consistently outperforms other popular oversampling methods. An implementation is made available in the Python programming language.

AB - Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods which generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification algorithm. Such techniques, called oversamplers, modify the training data, allowing any classifier to be used with class-imbalanced datasets. Many algorithms have been proposed for this task, but most are complex and tend to generate unnecessary noise. This work presents a simple and effective oversampling method based on k-means clustering and SMOTE (synthetic minority oversampling technique), which avoids the generation of noise and effectively overcomes imbalances between and within classes. Empirical results of extensive experiments with 90 datasets show that training data oversampled with the proposed method improves classification results. Moreover, k-means SMOTE consistently outperforms other popular oversampling methods. An implementation is made available in the Python programming language.

KW - Class-imbalanced learning

KW - Classification

KW - Clustering

KW - Oversampling

KW - Supervised learning

KW - Within-class imbalance

UR - http://www.scopus.com/inward/record.url?scp=85049450664&partnerID=8YFLogxK

UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=WOS_CPL&DestLinkType=FullRecord&UT=WOS:000445713900001

U2 - 10.1016/j.ins.2018.06.056

DO - 10.1016/j.ins.2018.06.056

M3 - Article

VL - 465

SP - 1

EP - 20

JO - Information Sciences

T2 - Information Sciences

JF - Information Sciences

SN - 0020-0255

ER -