{ "data_id": "43437", "name": "Gender-Recognition-by-Voice", "exact_name": "Gender-Recognition-by-Voice", "version": 1, "version_label": "v1.0", "description": "Voice Gender\nGender Recognition by Voice and Speech Analysis\nThis database was created to identify a voice as male or female, based upon acoustic properties of the voice and speech. The dataset consists of 3,168 recorded voice samples, collected from male and female speakers. The voice samples are pre-processed by acoustic analysis in R using the seewave and tuneR packages, with an analyzed frequency range of 0hz-280hz (human vocal range).\nThe Dataset\nThe following acoustic properties of each voice are measured and included within the CSV:\n\nmeanfreq: mean frequency (in kHz)\nsd: standard deviation of frequency\nmedian: median frequency (in kHz)\nQ25: first quantile (in kHz)\nQ75: third quantile (in kHz)\nIQR: interquantile range (in kHz)\nskew: skewness (see note in specprop description)\nkurt: kurtosis (see note in specprop description)\nsp.ent: spectral entropy\nsfm: spectral flatness\nmode: mode frequency\ncentroid: frequency centroid (see specprop)\npeakf: peak frequency (frequency with highest energy)\nmeanfun: average of fundamental frequency measured across acoustic signal\nminfun: minimum fundamental frequency measured across acoustic signal\nmaxfun: maximum fundamental frequency measured across acoustic signal\nmeandom: average of dominant frequency measured across acoustic signal\nmindom: minimum of dominant frequency measured across acoustic signal\nmaxdom: maximum of dominant frequency measured across acoustic signal\ndfrange: range of dominant frequency measured across acoustic signal\nmodindx: modulation index. Calculated as the accumulated absolute difference between adjacent measurements of fundamental frequencies divided by the frequency range\nlabel: male or female\n\nAccuracy\nBaseline (always predict male)\n50 \/ 50\nLogistic Regression\n97 \/ 98\nCART\n96 \/ 97\nRandom Forest\n100 \/ 98\nSVM\n100 \/ 99\nXGBoost\n100 \/ 99\nResearch Questions\nAn original analysis of the data-set can be found in the following article: \nIdentifying the Gender of a Voice using Machine Learning\nThe best model achieves 99 accuracy on the test set. According to a CART model, it appears that looking at the mean fundamental frequency might be enough to accurately classify a voice. However, some male voices use a higher frequency, even though their resonance differs from female voices, and may be incorrectly classified as female. To the human ear, there is apparently more than simple frequency, that determines a voice's gender.\nQuestions\n\nWhat other features differ between male and female voices?\nCan we find a difference in resonance between male and female voices?\nCan we identify falsetto from regular voices? 
(a separate dataset is likely needed for this)\nAre there other interesting features in the data?\n\nCART Diagram\n\nMean fundamental frequency appears to be an indicator of voice gender, with a threshold of 140 Hz separating male from female classifications.\nReferences\nThe Harvard-Haskins Database of Regularly-Timed Speech\nTelecommunications Signal Processing Laboratory (TSP) Speech Database at McGill University\nVoxForge Speech Corpus\nFestvox CMU_ARCTIC Speech Database at Carnegie Mellon University", "format": "arff", "uploader": "Dustin Carrion", "uploader_id": 30123, "visibility": "public", "creator": null, "contributor": null, "date": "2022-03-23 13:20:48", "update_comment": null, "last_update": "2022-03-23 13:20:48", "licence": "CC BY-NC-SA 4.0", "status": "active", "error_message": null, "url": "https:\/\/www.openml.org\/data\/download\/22102262\/dataset", "default_target_attribute": null, "row_id_attribute": null, "ignore_attribute": null, "runs": 0, "suggest": { "input": [ "Gender-Recognition-by-Voice", "Voice Gender Gender Recognition by Voice and Speech Analysis This database was created to identify a voice as male or female, based upon acoustic properties of the voice and speech. The dataset consists of 3,168 recorded voice samples, collected from male and female speakers. The voice samples are pre-processed by acoustic analysis in R using the seewave and tuneR packages, with an analyzed frequency range of 0-280 Hz (the human vocal range). The Dataset The following acoustic properties of each vo " ], "weight": 5 }, "qualities": { "NumberOfInstances": 3168, "NumberOfFeatures": 21, "NumberOfClasses": null, "NumberOfMissingValues": 0, "NumberOfInstancesWithMissingValues": 0, "NumberOfNumericFeatures": 20, "NumberOfSymbolicFeatures": 0, "Dimensionality": 0.006628787878787879, "PercentageOfNumericFeatures": 95.23809523809523, "MajorityClassPercentage": null, "PercentageOfSymbolicFeatures": 0, "MajorityClassSize": null, "MinorityClassPercentage": null, "MinorityClassSize": null, "NumberOfBinaryFeatures": 0, "PercentageOfBinaryFeatures": 0, "PercentageOfInstancesWithMissingValues": 0, "AutoCorrelation": null, "PercentageOfMissingValues": 0 }, "tags": [ { "uploader": "38960", "tag": "Computer Systems" }, { "uploader": "38960", "tag": "Machine Learning" } ], "features": [ { "name": "meanfreq", "index": "0", "type": "numeric", "distinct": "3166", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "sd", "index": "1", "type": "numeric", "distinct": "3166", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "median", "index": "2", "type": "numeric", "distinct": "3077", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "Q25", "index": "3", "type": "numeric", "distinct": "3103", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "Q75", "index": "4", "type": "numeric", "distinct": "3034", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "IQR", "index": "5", "type": "numeric", "distinct": "3073", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "skew", "index": "6", "type": "numeric", "distinct": "3166", "missing": "0", "min": "0", "max": "35", "mean": "3", "stdev": "4" }, { "name": "kurt", "index": "7", "type": "numeric", "distinct": "3166", "missing": "0", "min": "2", "max": "1310", "mean": "37", "stdev": "135" }, { "name": "sp.ent", "index": "8", "type": "numeric", "distinct": "3166", "missing": "0", "min": "1", 
"max": "1", "mean": "1", "stdev": "0" }, { "name": "sfm", "index": "9", "type": "numeric", "distinct": "3166", "missing": "0", "min": "0", "max": "1", "mean": "0", "stdev": "0" }, { "name": "mode", "index": "10", "type": "numeric", "distinct": "2825", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "centroid", "index": "11", "type": "numeric", "distinct": "3166", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "meanfun", "index": "12", "type": "numeric", "distinct": "3166", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "minfun", "index": "13", "type": "numeric", "distinct": "913", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "maxfun", "index": "14", "type": "numeric", "distinct": "123", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "meandom", "index": "15", "type": "numeric", "distinct": "2999", "missing": "0", "min": "0", "max": "3", "mean": "1", "stdev": "1" }, { "name": "mindom", "index": "16", "type": "numeric", "distinct": "77", "missing": "0", "min": "0", "max": "0", "mean": "0", "stdev": "0" }, { "name": "maxdom", "index": "17", "type": "numeric", "distinct": "1054", "missing": "0", "min": "0", "max": "22", "mean": "5", "stdev": "4" }, { "name": "dfrange", "index": "18", "type": "numeric", "distinct": "1091", "missing": "0", "min": "0", "max": "22", "mean": "5", "stdev": "4" }, { "name": "modindx", "index": "19", "type": "numeric", "distinct": "3079", "missing": "0", "min": "0", "max": "1", "mean": "0", "stdev": "0" }, { "name": "label", "index": "20", "type": "string", "distinct": "2", "missing": "0" } ], "nr_of_issues": 0, "nr_of_downvotes": 0, "nr_of_likes": 0, "nr_of_downloads": 0, "total_downloads": 0, "reach": 0, "reuse": 0, "impact_of_reuse": 0, "reach_of_reuse": 0, "impact": 0 }