{ "data_id": "1489", "name": "phoneme", "exact_name": "phoneme", "version": 1, "version_label": null, "description": "**Author**: Dominique Van Cappel, THOMSON-SINTRA \n**Source**: [KEEL](http:\/\/sci2s.ugr.es\/keel\/dataset.php?cod=105#sub2), [ELENA](https:\/\/www.elen.ucl.ac.be\/neural-nets\/Research\/Projects\/ELENA\/databases\/REAL\/phoneme\/) - 1993 \n**Please cite**: None \n\nThe aim of this dataset is to distinguish between nasal (class 0) and oral sounds (class 1). Five different attributes were chosen to characterize each vowel: they are the amplitudes of the five first harmonics AHi, normalised by the total energy Ene (integrated on all the frequencies): AHi\/Ene. The phonemes are transcribed as follows: sh as in she, dcl as in dark, iy as the vowel in she, aa as the vowel in dark, and ao as the first vowel in water. \n\n### Source\n\nThe current dataset was formatted by the KEEL repository, but originally hosted by the [ELENA Project](https:\/\/www.elen.ucl.ac.be\/neural-nets\/Research\/Projects\/ELENA\/elena.htm#stuff). The dataset originates from the European ESPRIT 5516 project: ROARS. The aim of this project was the development and the implementation of a real time analytical system for French and Spanish speech recognition. \n\n### Relevant information\n\nMost of the already existing speech recognition systems are global systems (typically Hidden Markov Models and Time Delay Neural Networks) which recognizes signals and do not really use the speech\nspecificities. On the contrary, analytical systems take into account the articulatory process leading to the different phonemes of a given language, the idea being to deduce the presence of each of the\nphonetic features from the acoustic observation.\n\nThe main difficulty of analytical systems is to obtain acoustical parameters sufficiantly reliable. 
These acoustical measurements must:\n\n - contain all the information relative to the phonetic feature concerned.\n - be speaker independent.\n - be context independent.\n - be reasonably robust to noise.\n\nThe primary acoustical observation is always voluminous (spectrum x N different observation moments), so classification cannot be performed on it directly.\n\nIn ROARS, the initial database is provided by cochlear spectra, which may be seen as the output of a filter bank with a constant DeltaF\/F0, whose central frequencies are distributed on a logarithmic (MEL-type) scale to simulate the frequency response of the auditory nerve. The filter outputs are sampled every 2 or 8 msec (integration over 4 or 16 msec), depending on whether the observed phoneme is stationary or transitory. \n\nThe aim of the present database is to distinguish between nasal and oral vowels. There are thus two different classes:\n\n- Class 0 : Nasals \n- Class 1 : Orals \n\nThis database contains vowels coming from 1809 isolated syllables (for example: pa, ta, pan, ...). Five attributes were chosen to characterize each vowel: the amplitudes of the five first harmonics AHi, normalised by the total energy Ene (integrated over all frequencies): AHi\/Ene. Each harmonic is signed: positive when it corresponds to a local maximum of the spectrum, negative otherwise.\n\nThree observation moments were kept for each vowel, giving 5427 instances in total: \n\n - the observation corresponding to the maximum total energy Ene. \n \n - the observations taken 8 msec before and 8 msec after this maximum-energy observation.\n\nFrom these 5427 initial instances, the 23 for which the amplitudes of the five first harmonics were all zero were removed, leaving the 5404 instances of the present database. 
The patterns are presented in a random order.\n\n### Past Usage \n\nAlinat, P., Periodic Progress Report 4, ROARS Project ESPRIT II- Number 5516, February 1993, Thomson report TS. ASM 93\/S\/EGS\/NC\/079 \n \nGuerin-Dugue, A. and others, Deliverable R3-B4-P - Task B4: Benchmarks, Technical report, Elena-NervesII \"Enhanced Learning for Evolutive Neural Architecture\", ESPRIT-Basic Research Project Number 6891, June 1995 \n\nVerleysen, M. and Voz, J.L. and Thissen, P. and Legat, J.D., A statistical Neural Network for high-dimensional vector classification, ICNN'95 - IEEE International Conference on Neural Networks, November 1995, Perth, Western Australia. \n \nVoz J.L., Verleysen M., Thissen P. and Legat J.D., Suboptimal Bayesian classification by vector quantization with small clusters. ESANN95-European Symposium on Artificial Neural Networks, April 1995, M. Verleysen editor, D facto publications, Brussels, Belgium. \n \nVoz J.L., Verleysen M., Thissen P. and Legat J.D., A practical view of suboptimal Bayesian classification, IWANN95-Proceedings of the International Workshop on Artificial Neural Networks, June 1995, Mira, Cabestany, Prieto editors, Springer-Verlag Lecture Notes in Computer Sciences, Malaga, Spain", "format": "ARFF", "uploader": "Rafael Gomes Mantovani", "uploader_id": 64, "visibility": "public", "creator": null, "contributor": null, "date": "2015-05-25 19:34:17", "update_comment": null, "last_update": "2015-11-09 20:25:20", "licence": "Public", "status": "active", "error_message": null, "url": "https:\/\/www.openml.org\/data\/download\/1592281\/php8Mz7BG", "default_target_attribute": "Class", "row_id_attribute": null, "ignore_attribute": null, "runs": 218957, "suggest": { "input": [ "phoneme", "The aim of this dataset is to distinguish between nasal (class 0) and oral sounds (class 1). 
Five different attributes were chosen to characterize each vowel: they are the amplitudes of the five first harmonics AHi, normalised by the total energy Ene (integrated on all the frequencies): AHi\/Ene. The phonemes are transcribed as follows: sh as in she, dcl as in dark, iy as the vowel in she, aa as the vowel in dark, and ao as the first vowel in water. ### Source The current dataset was formatted by " ], "weight": 5 }, "qualities": { "NumberOfInstances": 5404, "NumberOfFeatures": 6, "NumberOfClasses": 2, "NumberOfMissingValues": 0, "NumberOfInstancesWithMissingValues": 0, "NumberOfNumericFeatures": 5, "NumberOfSymbolicFeatures": 1, "kNN1NKappa": 0.6877227093332758, "MajorityClassSize": 3818, "MinAttributeEntropy": null, "Quartile2KurtosisOfNumericAtts": -0.3066496518690336, "REPTreeDepth2Kappa": 0.5893753379948957, "ClassEntropy": 0.8731822577241406, "MaxAttributeEntropy": null, "MinKurtosisOfNumericAtts": -0.8572834809909309, "Quartile2MeansOfNumericAtts": 3.3308660617082098e-9, "REPTreeDepth3AUC": 0.8756328290298097, "DecisionStumpAUC": 0.7404866739285669, "MaxKurtosisOfNumericAtts": 1.7651174033134938, "MinMeansOfNumericAtts": -6.698741675291565e-8, "Quartile2MutualInformation": null, "REPTreeDepth3ErrRate": 0.1659881569207994, "DecisionStumpErrRate": 0.24740932642487046, "MaxMeansOfNumericAtts": 6.1065877123851385e-9, "MinMutualInformation": null, "Quartile2SkewnessOfNumericAtts": 0.4842310006900062, "REPTreeDepth3Kappa": 0.5893753379948957, "DecisionStumpKappa": 0.4488154160690855, "MaxMutualInformation": null, "MinNominalAttDistinctValues": 2, "PercentageOfBinaryFeatures": 16.666666666666664, "Quartile2StdDevOfNumericAtts": 1.000000003087631, "RandomTreeDepth1AUC": 0.8065602505421655, "Dimensionality": 0.0011102886750555144, "MaxNominalAttDistinctValues": 2, "MinSkewnessOfNumericAtts": 0.2094948505005711, "PercentageOfInstancesWithMissingValues": 0, "Quartile3AttributeEntropy": null, "RandomTreeDepth1ErrRate": 0.15562546262028126, 
"EquivalentNumberOfAtts": null, "MaxSkewnessOfNumericAtts": 1.482393052784835, "MinStdDevOfNumericAtts": 0.9999999984765442, "PercentageOfMissingValues": 0, "Quartile3KurtosisOfNumericAtts": 1.6617568775963694, "AutoCorrelation": 0.5918933925596891, "RandomTreeDepth1Kappa": 0.6203230053600308, "J48.00001.AUC": 0.8678687005272036, "J48.00001.ErrRate": 0.16894892672094744, "MaxStdDevOfNumericAtts": 1.0000000160707985, "MinorityClassPercentage": 29.348630643967432, "PercentageOfNumericFeatures": 83.33333333333334, "Quartile3MeansOfNumericAtts": 5.829015545010302e-9, "CfsSubsetEval_DecisionStumpAUC": 0.8550177462963318, "RandomTreeDepth2AUC": 0.8065602505421655, "J48.00001.Kappa": 0.5893493980716988, "MeanAttributeEntropy": null, "MinorityClassSize": 1586, "PercentageOfSymbolicFeatures": 16.666666666666664, "Quartile3MutualInformation": null, "CfsSubsetEval_DecisionStumpErrRate": 0.1815321983715766, "RandomTreeDepth2ErrRate": 0.15562546262028126, "J48.0001.AUC": 0.8678687005272036, "MeanKurtosisOfNumericAtts": 0.33974898860956887, "NaiveBayesAUC": 0.8173487304115306, "Quartile1AttributeEntropy": null, "Quartile3SkewnessOfNumericAtts": 1.363664264477732, "CfsSubsetEval_DecisionStumpKappa": 0.5579430305860378, "RandomTreeDepth2Kappa": 0.6203230053600308, "J48.0001.ErrRate": 0.16894892672094744, "MeanMeansOfNumericAtts": -1.2287194680574282e-8, "NaiveBayesErrRate": 0.24037749814951886, "Quartile1KurtosisOfNumericAtts": -0.6590595801379302, "Quartile3StdDevOfNumericAtts": 1.000000010267071, "CfsSubsetEval_NaiveBayesAUC": 0.8550177462963318, "RandomTreeDepth3AUC": 0.8065602505421655, "RandomTreeDepth3ErrRate": 0.15562546262028126, "J48.0001.Kappa": 0.5893493980716988, "MeanMutualInformation": null, "NaiveBayesKappa": 0.46338464605596114, "Quartile1MeansOfNumericAtts": -3.821243527730011e-8, "REPTreeDepth1AUC": 0.8756328290298097, "CfsSubsetEval_NaiveBayesErrRate": 0.1815321983715766, "RandomTreeDepth3Kappa": 0.6203230053600308, "J48.001.AUC": 0.8678687005272036, 
"MeanNoiseToSignalRatio": null, "NumberOfBinaryFeatures": 1, "Quartile1MutualInformation": null, "REPTreeDepth1ErrRate": 0.1659881569207994, "CfsSubsetEval_NaiveBayesKappa": 0.5579430305860378, "StdvNominalAttDistinctValues": 0, "J48.001.ErrRate": 0.16894892672094744, "MeanNominalAttDistinctValues": 2, "Quartile1SkewnessOfNumericAtts": 0.34288338170102217, "REPTreeDepth1Kappa": 0.5893753379948957, "CfsSubsetEval_kNN1NAUC": 0.8550177462963318, "kNN1NAUC": 0.8367145868412517, "J48.001.Kappa": 0.5893493980716988, "MeanSkewnessOfNumericAtts": 0.7794652586095029, "Quartile1StdDevOfNumericAtts": 0.9999999997309185, "REPTreeDepth2AUC": 0.8756328290298097, "CfsSubsetEval_kNN1NErrRate": 0.1815321983715766, "kNN1NErrRate": 0.12675795706883788, "MajorityClassPercentage": 70.65136935603256, "MeanStdDevOfNumericAtts": 1.000000004616722, "Quartile2AttributeEntropy": null, "REPTreeDepth2ErrRate": 0.1659881569207994, "CfsSubsetEval_kNN1NKappa": 0.5579430305860378 }, "tags": [ { "tag": "Chemistry", "uploader": "38960" }, { "tag": "Life Science", "uploader": "38960" }, { "tag": "OpenML-CC18", "uploader": "1" }, { "tag": "OpenML100", "uploader": "348" }, { "tag": "speech recognition", "uploader": "2" }, { "tag": "study_123", "uploader": "3886" }, { "tag": "study_14", "uploader": "64" }, { "tag": "study_218", "uploader": "869" }, { "tag": "study_34", "uploader": "1" }, { "tag": "study_50", "uploader": "64" }, { "tag": "study_52", "uploader": "64" }, { "tag": "study_7", "uploader": "64" }, { "tag": "study_98", "uploader": "1935" }, { "tag": "study_99", "uploader": "1" }, { "tag": "study_225", "uploader": "0" }, { "tag": "study_236", "uploader": "0" }, { "tag": "study_271", "uploader": "0" }, { "tag": "study_240", "uploader": "0" }, { "tag": "study_253", "uploader": "0" }, { "tag": "study_379", "uploader": "0" }, { "tag": "study_275", "uploader": "0" } ], "features": [ { "name": "Class", "index": "5", "type": "nominal", "distinct": "2", "missing": "0", "target": "1", "distr": [ [ "1", 
"2" ], [ [ "3818", "0" ], [ "0", "1586" ] ] ] }, { "name": "V1", "index": "0", "type": "numeric", "distinct": "5336", "missing": "0", "min": "-3", "max": "4", "mean": "0", "stdev": "1" }, { "name": "V2", "index": "1", "type": "numeric", "distinct": "5312", "missing": "0", "min": "-3", "max": "4", "mean": "0", "stdev": "1" }, { "name": "V3", "index": "2", "type": "numeric", "distinct": "5308", "missing": "0", "min": "-3", "max": "3", "mean": "0", "stdev": "1" }, { "name": "V4", "index": "3", "type": "numeric", "distinct": "5336", "missing": "0", "min": "-2", "max": "3", "mean": "0", "stdev": "1" }, { "name": "V5", "index": "4", "type": "numeric", "distinct": "4499", "missing": "0", "min": "-2", "max": "5", "mean": "0", "stdev": "1" } ], "nr_of_issues": 0, "nr_of_downvotes": 0, "nr_of_likes": 6, "nr_of_downloads": 41, "total_downloads": 52, "reach": 47, "reuse": 30, "impact_of_reuse": 0, "reach_of_reuse": 5, "impact": 32 }