thyroid-ann

active ARFF Publicly available Visibility: public Uploaded 29-07-2016 by Rafael Gomes Mantovani
This directory contains Thyroid datasets. "ann-train.data" contains 3772 learning examples and "ann-test.data" contains 3428 testing examples. I obtained this data from Daimler-Benz. This is the information I received together with the dataset:

-------------------------------------------------------------------------------
1. Data set summary

Number of attributes: 21 (15 attributes are binary, 6 attributes are continuous)
Number of classes: 3
Number of learning examples: 3772
Number of testing examples: 3428
The data set is available as an ASCII file.

2. Description

The problem is to determine whether a patient referred to the clinic is hypothyroid. Therefore three classes are built: normal (not hypothyroid), hyperfunction and subnormal functioning. Because 92 percent of the patients are not hypothyroid, a good classifier must be significantly better than 92%.

Note: These are the data Quinlan used in the case study of his article "Simplifying Decision Trees" (International Journal of Man-Machine Studies (1987) 221-234).
-------------------------------------------------------------------------------

Unfortunately this data differs from the data Ross Quinlan placed in "pub/machine-learning-databases/thyroid-disease" on "ics.uci.edu". I don't know any more details about the dataset, but it is hard to train backpropagation ANNs with it.

The dataset is used in two technical reports:

-------------------------------------------------------------------------------
"Optimization of the Backpropagation Algorithm for Training Multilayer Perceptrons":

ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52)
cd pub/neuroprose
binary
get schiff.bp_speedup.ps.Z
quit

The report is an overview of many different backprop speedup techniques. 15 different algorithms are described in detail and compared using a big, very hard to solve, practical data set. Learning speed and network classification performance, both with respect to the training data set and with respect to a testing data set, are discussed. These are the tested algorithms:

backprop
backprop (batch mode)
backprop + learning rate calculated by Eaton and Oliver's formula
backprop + decreasing learning rate (Darken and Moody)
backprop + learning rate adaptation for each training pattern (J. Schmidhuber)
backprop + evolutionary learning rate adaptation (R. Salomon)
backprop + angle-driven learning rate adaptation (Chan and Fallside)
Polak-Ribiere + line search (Kramer and Vincentelli)
Conj. gradient + line search (Leonard and Kramer)
backprop + learning rate adaptation by sign changes (Silva and Almeida)
SuperSAB (T. Tollenaere)
Delta-Bar-Delta (Jacobs)
RPROP (Riedmiller and Braun)
Quickprop (Fahlman)
Cascade correlation (Fahlman)

-------------------------------------------------------------------------------
"Synthesis and Performance Analysis of Multilayer Neural Network Architectures":

ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52)
cd pub/neuroprose
binary
get schiff.gann.ps.Z
quit

In this paper we present various approaches for automatic topology optimization of backpropagation networks. First of all, we review the basics of genetic algorithms, which are our essential tool for a topology search. Then we give a survey of backprop and the topological properties of feedforward networks. We report on pioneering work in the field of topology optimization. Our first approach was based on evolution strategies which used only mutation to change the parents' topologies. Now we have found a way to extend this approach with a crossover operator, which is essential to all genetic search methods. In contrast to competing approaches, it allows two parent networks with different numbers of units to mate and produce a (valid) child network which inherits genes from both parents.

We applied our genetic algorithm to a medical classification problem which is extremely difficult to solve. The performance with respect to the training set and a test set of pattern samples was compared to fixed network topologies. Our results confirm that topology optimization makes sense, because the generated networks outperform the fixed topologies and reach classification performance near the optimum.
-------------------------------------------------------------------------------

Randolf Werner (evol@infko.uni-koblenz.de)
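The 92% majority-class baseline mentioned in the description can be checked directly from the class counts reported further down this page (3488 of 3772 training instances belong to the most frequent class). A minimal sketch, using only those two numbers:

```python
# Majority-class baseline for the thyroid-ann training set.
# Class counts are taken from the dataset properties on this page:
# 3488 instances in the most frequent class out of 3772 total.
majority = 3488
total = 3772

# Accuracy of a trivial classifier that always predicts the majority class.
baseline = majority / total
print(f"majority-class baseline: {baseline:.4f}")  # 0.9247

# A useful classifier must clearly exceed this baseline; the margin here
# is an illustrative choice, not part of the dataset documentation.
def beats_baseline(accuracy, margin=0.01):
    """True if a classifier's accuracy is meaningfully above the baseline."""
    return accuracy > baseline + margin
```

This is why the description warns that a good classifier "must be significantly better than 92%": always predicting "normal" already achieves 92.47% accuracy.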

22 features

Class (target): nominal, 3 unique values, 0 missing
V1: numeric, 93 unique values, 0 missing
V2: numeric, 2 unique values, 0 missing
V3: numeric, 2 unique values, 0 missing
V4: numeric, 2 unique values, 0 missing
V5: numeric, 2 unique values, 0 missing
V6: numeric, 2 unique values, 0 missing
V7: numeric, 2 unique values, 0 missing
V8: numeric, 2 unique values, 0 missing
V9: numeric, 2 unique values, 0 missing
V10: numeric, 2 unique values, 0 missing
V11: numeric, 2 unique values, 0 missing
V12: numeric, 2 unique values, 0 missing
V13: numeric, 2 unique values, 0 missing
V14: numeric, 2 unique values, 0 missing
V15: numeric, 2 unique values, 0 missing
V16: numeric, 2 unique values, 0 missing
V17: numeric, 280 unique values, 0 missing
V18: numeric, 72 unique values, 0 missing
V19: numeric, 243 unique values, 0 missing
V20: numeric, 141 unique values, 0 missing
V21: numeric, 324 unique values, 0 missing
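The split into 15 binary and 6 continuous attributes stated in the description can be recovered from the unique-value counts listed above: attributes with exactly two observed values are the binary flags, the rest are the continuous measurements. A small illustrative sketch (counts copied from the feature list):

```python
# Unique-value counts per attribute, as listed on this page.
unique_counts = {
    "V1": 93,  "V2": 2,   "V3": 2,   "V4": 2,   "V5": 2,   "V6": 2,
    "V7": 2,   "V8": 2,   "V9": 2,   "V10": 2,  "V11": 2,  "V12": 2,
    "V13": 2,  "V14": 2,  "V15": 2,  "V16": 2,  "V17": 280,
    "V18": 72, "V19": 243, "V20": 141, "V21": 324,
}

# Attributes with exactly two observed values behave as binary flags;
# the remainder are the continuous measurements.
binary = sorted(v for v, n in unique_counts.items() if n == 2)
continuous = sorted(v for v, n in unique_counts.items() if n > 2)

print(len(binary), len(continuous))  # 15 6
```

Note that the binary attributes are stored as numeric columns here, which is why the properties below count 21 numeric attributes and 0 (nominal) binary attributes.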

62 properties

3772 - Number of instances (rows) of the dataset.
22 - Number of attributes (columns) of the dataset.
3 - Number of distinct values of the target attribute (if it is nominal).
0 - Number of missing values in the dataset.
0 - Number of instances with at least one value missing.
21 - Number of numeric attributes.
1 - Number of nominal attributes.
0.45 - Entropy of the target attribute values.
An estimate of the amount of irrelevant information in the attributes regarding the class. Equals (MeanAttributeEntropy - MeanMutualInformation) divided by MeanMutualInformation.
Second quartile (Median) of entropy among attributes.
0.01 - Number of attributes divided by the number of instances.
3 - Average number of distinct values among the attributes of the nominal type.
22.53 - Second quartile (Median) of kurtosis among attributes of the numeric type.
Number of attributes needed to optimally describe the class (under the assumption of independence among attributes). Equals ClassEntropy divided by MeanMutualInformation.
8.32 - Mean skewness among attributes of the numeric type.
0.03 - Second quartile (Median) of means among attributes of the numeric type.
92.47 - Percentage of instances belonging to the most frequent class.
0.14 - Mean standard deviation of attributes of the numeric type.
Second quartile (Median) of mutual information between the nominal attributes and the target attribute.
3488 - Number of instances belonging to the most frequent class.
Minimal entropy among attributes.
4.8 - Second quartile (Median) of skewness among attributes of the numeric type.
Maximum entropy among attributes.
-1.27 - Minimum kurtosis among attributes of the numeric type.
0 - Percentage of binary attributes.
0.12 - Second quartile (Median) of standard deviation of attributes of the numeric type.
3772 - Maximum kurtosis among attributes of the numeric type.
0 - Minimum of means among attributes of the numeric type.
0 - Percentage of instances having missing values.
Third quartile of entropy among attributes.
0.52 - Maximum of means among attributes of the numeric type.
Minimal mutual information between the nominal attributes and the target attribute.
0 - Percentage of missing values.
77.47 - Third quartile of kurtosis among attributes of the numeric type.
Maximum mutual information between the nominal attributes and the target attribute.
3 - The minimal number of distinct values among attributes of the nominal type.
95.45 - Percentage of numeric attributes.
0.1 - Third quartile of means among attributes of the numeric type.
3 - The maximum number of distinct values among attributes of the nominal type.
-0.2 - Minimum skewness among attributes of the numeric type.
4.55 - Percentage of nominal attributes.
Third quartile of mutual information between the nominal attributes and the target attribute.
61.42 - Maximum skewness among attributes of the numeric type.
0.01 - Minimum standard deviation of attributes of the numeric type.
First quartile of entropy among attributes.
8.91 - Third quartile of skewness among attributes of the numeric type.
0.46 - Maximum standard deviation of attributes of the numeric type.
2.47 - Percentage of instances belonging to the least frequent class.
9.22 - First quartile of kurtosis among attributes of the numeric type.
0.2 - Third quartile of standard deviation of attributes of the numeric type.
Average entropy of the attributes.
93 - Number of instances belonging to the least frequent class.
0.01 - First quartile of means among attributes of the numeric type.
0 - Standard deviation of the number of distinct values among attributes of the nominal type.
229.97 - Mean kurtosis among attributes of the numeric type.
0 - Number of binary attributes.
First quartile of mutual information between the nominal attributes and the target attribute.
0.08 - Mean of means among attributes of the numeric type.
2.09 - First quartile of skewness among attributes of the numeric type.
0.86 - Average class difference between consecutive instances.
Average mutual information between the nominal attributes and the target attribute.
0.03 - First quartile of standard deviation of attributes of the numeric type.
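The reported target entropy of 0.45 can be reproduced from the class counts above: 3488 instances in the most frequent class, 93 in the least frequent, and (by subtraction) the remaining 191 in the third class. A quick check:

```python
import math

# Class sizes for the 3772 training instances, from the properties above:
# most frequent class 3488, least frequent 93, the rest in the third class.
counts = [3488, 3772 - 3488 - 93, 93]
total = sum(counts)

# Shannon entropy (base 2) of the class distribution.
entropy = -sum((c / total) * math.log2(c / total) for c in counts)
print(round(entropy, 2))  # 0.45
```

The low entropy (well under the log2(3) ≈ 1.58 of a balanced three-class problem) reflects the same imbalance behind the 92.47% majority-class figure.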

14 tasks

31 runs - estimation_procedure: 10-fold Crossvalidation - target_feature: Class
0 runs - estimation_procedure: 33% Holdout set - target_feature: Class
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - target_feature: 1
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering
0 runs - estimation_procedure: 50 times Clustering