{ "data_id": "44075", "name": "phoneme", "exact_name": "phoneme", "version": 4, "version_label": null, "description": "Dataset used in the tabular data benchmark https:\/\/github.com\/LeoGrin\/tabular-benchmark, transformed in the same way. This dataset belongs to the \"regression on numerical features\" benchmark. Original description: \n \n**Author**: Dominique Van Cappel, THOMSON-SINTRA \n**Source**: [KEEL](http:\/\/sci2s.ugr.es\/keel\/dataset.php?cod=105#sub2), [ELENA](https:\/\/www.elen.ucl.ac.be\/neural-nets\/Research\/Projects\/ELENA\/databases\/REAL\/phoneme\/) - 1993 \n**Please cite**: None \n\nThe aim of this dataset is to distinguish between nasal (class 0) and oral sounds (class 1). Five different attributes were chosen to characterize each vowel: they are the amplitudes of the five first harmonics AHi, normalised by the total energy Ene (integrated on all the frequencies): AHi\/Ene. The phonemes are transcribed as follows: sh as in she, dcl as in dark, iy as the vowel in she, aa as the vowel in dark, and ao as the first vowel in water. \n\n### Source\n\nThe current dataset was formatted by the KEEL repository, but originally hosted by the [ELENA Project](https:\/\/www.elen.ucl.ac.be\/neural-nets\/Research\/Projects\/ELENA\/elena.htm#stuff). The dataset originates from the European ESPRIT 5516 project: ROARS. The aim of this project was the development and the implementation of a real time analytical system for French and Spanish speech recognition. \n\n### Relevant information\n\nMost of the already existing speech recognition systems are global systems (typically Hidden Markov Models and Time Delay Neural Networks) which recognizes signals and do not really use the speech\nspecificities. On the contrary, analytical systems take into account the articulatory process leading to the different phonemes of a given language, the idea being to deduce the presence of each of the\nphonetic features from the acoustic observation.\n\nThe main difficulty of analytical systems is to obtain acoustical parameters sufficiantly reliable. These acoustical measurements must :\n\n - contain all the information relative to the concerned phonetic feature.\n - being speaker independent.\n - being context independent.\n - being more or less robust to noise.\n\nThe primary acoustical observation is always voluminous (spectrum x N different observation moments) and classification cannot been processed directly.\n\nIn ROARS, the initial database is provided by cochlear spectra, which may be seen as the output of a filters bank having a constant DeltaF\/F0, where the central frequencies are distributed on a\nlogarithmic scale (MEL type) to simulate the frequency answer of the auditory nerves. The filters outputs are taken every 2 or 8 msec (integration on 4 or 16 msec) depending on the type of phoneme\nobserved (stationary or transitory). \n\nThe aim of the present database is to distinguish between nasal and\noral vowels. There are thus two different classes:\n\n- Class 0 : Nasals \n- Class 1 : Orals \n\nThis database contains vowels coming from 1809 isolated syllables (for example: pa, ta, pan,...). Five different attributes were chosen to characterize each vowel: they are the amplitudes of the five first harmonics AHi, normalised by the total energy Ene (integrated on all the frequencies): AHi\/Ene. 
Each harmonic is signed: positive when it corresponds to a local maximum of the spectrum and negative otherwise.\n\nThree observation moments were kept for each vowel, giving 5427 different instances: \n\n - the observation corresponding to the maximum total energy Ene. \n \n - the observations taken 8 msec before and 8 msec after the observation corresponding to this maximum total energy.\n\nFrom these 5427 initial values, 23 instances for which the amplitude of the five first harmonics was zero were removed, leaving the 5404 instances of the present database. The patterns are presented in a random order.\n\n### Past Usage \n\nAlinat, P., Periodic Progress Report 4, ROARS Project ESPRIT II - Number 5516, February 1993, Thomson report TS. ASM 93\/S\/EGS\/NC\/079 \n \nGuerin-Dugue, A. and others, Deliverable R3-B4-P - Task B4: Benchmarks, Technical report, Elena-NervesII \"Enhanced Learning for Evolutive Neural Architecture\", ESPRIT Basic Research Project Number 6891, June 1995 \n\nVerleysen, M., Voz, J.L., Thissen, P. and Legat, J.D., A statistical neural network for high-dimensional vector classification, ICNN'95 - IEEE International Conference on Neural Networks, November 1995, Perth, Western Australia. \n \nVoz, J.L., Verleysen, M., Thissen, P. and Legat, J.D., Suboptimal Bayesian classification by vector quantization with small clusters, ESANN'95 - European Symposium on Artificial Neural Networks, April 1995, M. Verleysen editor, D facto publications, Brussels, Belgium. \n \nVoz, J.L., Verleysen, M., Thissen, P. and Legat, J.D., A practical view of suboptimal Bayesian classification, IWANN'95 - Proceedings of the International Workshop on Artificial Neural Networks, June 1995, Mira, Cabestany, Prieto editors, Springer-Verlag Lecture Notes in Computer Science, Malaga, Spain", "format": "arff", "uploader": "Leo Grin", "uploader_id": 26324, "visibility": "public", "creator": null, "contributor": "\"Leo Grin\"", "date": "2022-06-21 11:32:29", "update_comment": null, "last_update": "2022-06-21 11:32:29", "licence": "Public", "status": "active", "error_message": null, "url": "https:\/\/old.openml.org\/data\/download\/22103171\/dataset", "default_target_attribute": "Class", "row_id_attribute": null, "ignore_attribute": null, "runs": 0, "suggest": { "input": [ "phoneme", "Dataset used in the tabular data benchmark https:\/\/github.com\/LeoGrin\/tabular-benchmark, transformed in the same way. This dataset belongs to the \"classification on numerical features\" benchmark. Original description: The aim of this dataset is to distinguish between nasal (class 0) and oral sounds (class 1). 
Five different attributes were chosen to characterize each vowel: they are the amplitudes of the five first harmonics AHi, normalised by the total energy Ene (integrated on all the frequencies) " ], "weight": 5 }, "qualities": { "NumberOfInstances": 3172, "NumberOfFeatures": 6, "NumberOfClasses": 2, "NumberOfMissingValues": 0, "NumberOfInstancesWithMissingValues": 0, "NumberOfNumericFeatures": 5, "NumberOfSymbolicFeatures": 1, "PercentageOfBinaryFeatures": 16.666666666666664, "PercentageOfInstancesWithMissingValues": 0, "AutoCorrelation": 0.9996846420687481, "PercentageOfMissingValues": 0, "Dimensionality": 0.0018915510718789407, "PercentageOfNumericFeatures": 83.33333333333334, "MajorityClassPercentage": 50, "PercentageOfSymbolicFeatures": 16.666666666666664, "MajorityClassSize": 1586, "MinorityClassPercentage": 50, "MinorityClassSize": 1586, "NumberOfBinaryFeatures": 1 }, "tags": [ { "uploader": "38960", "tag": "Computer Systems" }, { "uploader": "38960", "tag": "Machine Learning" } ], "features": [ { "name": "Class", "index": "5", "type": "nominal", "distinct": "2", "missing": "0", "target": "1", "distr": [ [ "1", "2" ], [ [ "1586", "0" ], [ "0", "1586" ] ] ] }, { "name": "V1", "index": "0", "type": "numeric", "distinct": "3135", "missing": "0", "min": "-3", "max": "4", "mean": "0", "stdev": "1" }, { "name": "V2", "index": "1", "type": "numeric", "distinct": "3131", "missing": "0", "min": "-3", "max": "4", "mean": "0", "stdev": "1" }, { "name": "V3", "index": "2", "type": "numeric", "distinct": "3122", "missing": "0", "min": "-3", "max": "3", "mean": "0", "stdev": "1" }, { "name": "V4", "index": "3", "type": "numeric", "distinct": "3134", "missing": "0", "min": "-2", "max": "3", "mean": "0", "stdev": "1" }, { "name": "V5", "index": "4", "type": "numeric", "distinct": "2659", "missing": "0", "min": "-2", "max": "4", "mean": "0", "stdev": "1" } ], "nr_of_issues": 0, "nr_of_downvotes": 0, "nr_of_likes": 0, "nr_of_downloads": 0, "total_downloads": 0, "reach": 0, "reuse": 0, "impact_of_reuse": 0, "reach_of_reuse": 0, "impact": 0 }
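Since this record is usually consumed programmatically, a short loading sketch may help. The snippet below is not part of the OpenML metadata: it assumes Python with scikit-learn and network access, uses the `data_id` (44075) and target attribute (`Class`) from the record above, and the `LogisticRegression` baseline is purely illustrative.

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fetch the transformed benchmark version of "phoneme" by the data_id above.
# Per the qualities block: 3172 instances, 5 numeric features (V1..V5),
# and a balanced binary target "Class".
phoneme = fetch_openml(data_id=44075, as_frame=True)
X, y = phoneme.data, phoneme.target

# Hypothetical baseline: stratified hold-out split plus logistic regression.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```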