Test-cost-sensitive attribute reduction

F Min, H He, Y Qian, W Zhu - Information Sciences, 2011 - Elsevier
In many data mining and machine learning applications, there are two objectives in the task
of classification: one is decreasing the test cost; the other is improving the classification …

Test-cost sensitive classification on data with missing values

Q Yang, C Ling, X Chai, R Pan - IEEE Transactions on …, 2006 - ieeexplore.ieee.org
In the area of cost-sensitive learning, inductive learning algorithms have been extended to
handle different types of costs to better represent misclassification errors. Most of the …

Attribute reduction of data with error ranges and test costs

F Min, W Zhu - Information Sciences, 2012 - Elsevier
In data mining applications, we have a number of measurement methods to obtain a data
item with different test costs and different error ranges. Test costs refer to time, money, or …

Attribute group for attribute reduction

Y Chen, K Liu, J Song, H Fujita, X Yang, Y Qian - Information Sciences, 2020 - Elsevier
In the field of rough sets, much attention has been paid to improving the efficiency of
obtaining a reduct. One of the typical strategies is to reduce the number of comparisons …

A cost sensitive decision tree algorithm based on weighted class distribution with batch deleting attribute mechanism

H Zhao, X Li - Information Sciences, 2017 - Elsevier
Minimal cost classification is an important issue in data mining and machine learning.
Recently, many enhanced algorithms based on the C4.5 algorithm have been proposed to …

Accelerator for multi-granularity attribute reduction

Z Jiang, X Yang, H Yu, D Liu, P Wang, Y Qian - Knowledge-Based Systems, 2019 - Elsevier
Considering information granulation in Granular Computing, the concept of multi-granularity
is important, mainly because different results of information granulation will …

Wrapper framework for test-cost-sensitive feature selection

L Jiang, G Kong, C Li - IEEE Transactions on Systems, Man …, 2019 - ieeexplore.ieee.org
Feature selection is an optional preprocessing procedure and is frequently used to improve
the classification accuracy of a machine learning algorithm by removing irrelevant and/or …

Feature value acquisition in testing: a sequential batch test algorithm

VS Sheng, CX Ling - Proceedings of the 23rd international conference …, 2006 - dl.acm.org
In medical diagnosis, doctors often have to order sets of medical tests in sequence in order
to make an accurate diagnosis of patient diseases. While doing so they have to make a …

" Missing is useful": Missing values in cost-sensitive decision trees

S Zhang, Z Qin, CX Ling… - IEEE transactions on …, 2005 - ieeexplore.ieee.org
Many real-world data sets for machine learning and data mining contain missing values, and
much previous research regards them as a problem and attempts to impute missing values …

Attribute reduction via local conditional entropy

Y Wang, X Chen, K Dong - International Journal of Machine Learning and …, 2019 - Springer
In rough set theory, the concept of conditional entropy has been widely accepted for
studying the problem of attribute reduction. If a searching strategy is given to find a reduct …