Measuring robustness of feature selection techniques on software engineering datasets

H Wang, TM Khoshgoftaar… - 2011 IEEE International Conference on Information Reuse & Integration, 2011 - ieeexplore.ieee.org
Feature selection is a process that identifies irrelevant and redundant features in a high-dimensional dataset (that is, a dataset with many features) and removes them before further analysis is performed. Recently, the robustness (i.e., stability) of feature selection techniques has been studied to examine the sensitivity of these techniques to changes in their input data. In this study, we investigate the robustness of six commonly used feature selection techniques as the magnitude of change to the datasets and the size of the selected feature subsets are varied. All experiments were conducted on 16 datasets from three real-world software projects. The experimental results demonstrate that Gain Ratio shows the least stability on average, while two different versions of ReliefF show the most stability. Results also show that smaller changes to the datasets have less impact on the stability of feature ranking techniques applied to those datasets.
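The abstract does not state which stability metric the authors use. A common choice in feature-selection stability studies is Kuncheva's consistency index, which measures the overlap between two equal-size feature subsets while correcting for the overlap expected by chance. The sketch below (function names are illustrative, not from the paper) shows how such a metric yields a stability score across the subsets a selector picks on perturbed copies of a dataset:

```python
def kuncheva_consistency(a, b, n):
    """Kuncheva's consistency index for two feature subsets of equal size k
    drawn from n total features; corrects raw overlap for chance agreement."""
    a, b = set(a), set(b)
    k = len(a)
    if k != len(b):
        raise ValueError("subsets must have equal size")
    if k == 0 or k == n:
        return 1.0  # degenerate cases: the subsets are forced to coincide
    r = len(a & b)        # observed overlap
    expected = k * k / n  # overlap expected under random selection
    return (r - expected) / (k - expected)

def pairwise_stability(subsets, n):
    """Average consistency over all pairs of subsets, one subset per
    perturbed copy of the dataset; higher means a more robust selector."""
    m = len(subsets)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    total = sum(kuncheva_consistency(subsets[i], subsets[j], n)
                for i, j in pairs)
    return total / len(pairs)

# A perfectly stable selector picks the same top-k features on every
# perturbed dataset:
stable = [{0, 1, 2, 3}, {0, 1, 2, 3}, {0, 1, 2, 3}]
print(pairwise_stability(stable, n=20))    # 1.0

# Disjoint subsets score below zero, i.e. worse than chance overlap:
unstable = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10, 11}]
print(pairwise_stability(unstable, n=20))  # -0.25
```

This framing also makes the abstract's two experimental variables concrete: the "magnitude of change" controls how the perturbed copies are generated, and the "size of the selected feature subsets" is k, which the chance-correction term depends on.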
PDF: example.edu/paper.pdf