Technology of Sound and Image, AUTH
Speech/Music classification of audio files using machine learning techniques.
Clone
Clone this repo to your local machine using git:
git clone https://github.com/laserscout/THE-Assignment.git
Dependencies
We recommend running the scripts in this repository inside a virtual environment, in order to avoid Python 2/3 incompatibilities and other unwelcome breakage. You can set one up with Python's venv module (or any other tool you prefer) as described below:
First make sure that you have python3, the venv module for your version of Python 3, and pip for Python 3 on your machine. We have tested this setup on Ubuntu 18.04 with Python 3.6.
Then you can install the dependencies inside the virtual environment only:
cd THE-Assignment/classifier/
python3 -m venv myenv
source myenv/bin/activate
pip install numpy
pip install scipy
pip install scikit-learn
pip install pandas
pip install seaborn
pip install essentia
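If you want a quick sanity check that everything resolved correctly inside the activated environment, importing each dependency once is enough. This snippet is only a convenience, not part of the project:
```python
# Optional sanity check: run inside the activated virtual environment.
# If any of these imports fails, the corresponding `pip install` did not succeed.
import numpy
import scipy
import sklearn
import pandas
import seaborn
import essentia

print("All dependencies imported successfully.")
```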
Obtaining a data set
If you wish to use the GTZAN data set that we used, you can run the downloadDataSet.sh script. Alternatively, you can use your own data set.
Feature extraction
The file feature_extraction/feature_extractor is a Python module that uses the open-source library Essentia to extract audio features from the audio file at the path given as the first parameter and to save the feature values to a JSON file at the path given as the second parameter.
The module can be imported or executed as a script using the following command:
python feature_extractor.py <audio_file_path> <extracted_features_file_path> <audio_file_sample_rate>
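As a rough illustration of what Essentia-based extraction involves, the sketch below loads an audio file, computes frame-wise MFCCs, and dumps their per-coefficient means to JSON. It is only a sketch of the general approach; the module's actual feature set, function names, and file paths may differ.
```python
# Illustrative sketch only, not the module's actual code.
# Paths and the chosen features are assumptions.
import json
import numpy as np
import essentia.standard as es

audio_path = "dataset/music/example.wav"   # hypothetical input
out_path = "features/example.json"         # hypothetical output
sample_rate = 22050                        # pass the file's real sample rate

audio = es.MonoLoader(filename=audio_path, sampleRate=sample_rate)()
window = es.Windowing(type="hann")
spectrum = es.Spectrum()
mfcc = es.MFCC(inputSize=513)              # 513 = 1024 // 2 + 1 spectrum bins

frames = []
for frame in es.FrameGenerator(audio, frameSize=1024, hopSize=512):
    _, coeffs = mfcc(spectrum(window(frame)))
    frames.append(coeffs)

# Summarise the frame-wise coefficients into one value per coefficient.
means = np.mean(frames, axis=0)
features = {f"mfcc_mean_{i}": float(v) for i, v in enumerate(means)}

with open(out_path, "w") as f:
    json.dump(features, f, indent=2)
```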
A Python script is also provided for batch feature extraction. The script can be executed using the following command:
python batch_feature_extractor.py <audio_files_directory/> <feature_files_directory/> <audio_files_sample_rate>
Data preprocessing
The file preprocessing/data_preprocessing is a Python module that uses the open-source library scikit-learn to apply several preprocessing techniques to the previously extracted data.
The module can be imported or executed as a script using the following command:
python data_preprocessing.py <music_data_directory> <speech_data_directory>
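For a sense of what such preprocessing can look like, here is a minimal sketch that loads the extracted JSON features into a labelled pandas DataFrame, standardises them, and saves a pickle for the training step. It assumes flat {feature: value} JSON files and hypothetical paths; the module's actual steps may differ.
```python
# Illustrative sketch only, not the module's actual code.
# Assumes each JSON file holds flat {feature_name: numeric_value} pairs.
import json
from pathlib import Path

import pandas as pd
from sklearn.preprocessing import StandardScaler

def load_features(directory, label):
    """Read every feature JSON in `directory` and attach a class label."""
    rows = []
    for path in Path(directory).glob("*.json"):
        with open(path) as f:
            rows.append({**json.load(f), "label": label})
    return pd.DataFrame(rows)

df = pd.concat(
    [
        load_features("features/music", "music"),    # hypothetical paths
        load_features("features/speech", "speech"),
    ],
    ignore_index=True,
)

# Standardise every feature column, leaving the label untouched.
feature_cols = df.columns.drop("label")
df[feature_cols] = StandardScaler().fit_transform(df[feature_cols])

df.to_pickle("dataset.pkl")  # placeholder filename for the dataset pickle
```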
Model training
The file training/model_training is a Python module that uses the open-source library scikit-learn to train several different models and one ensemble (Random Forest).
The module can be imported or executed as a script using the following command:
python model_training.py <dataset_pickle> <model_selection>
Where:
- dataset_pickle is the pandas pickle (.pkl) file of the dataset DataFrame saved to disk. This file is generated by the data_preprocessing module.
- model_selection is a string denoting which model the script should use. It can be one of svm (SVM), dtree (decision tree), nn (multi-layer perceptron), bayes (naive Bayes), or rndForest (random forest); an illustrative run is sketched below.
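The sketch below shows what such a run can amount to with scikit-learn, using the svm option as an example. The pickle filename, the label column name, and the hyperparameters are assumptions; the module's own training and evaluation code may differ.
```python
# Illustrative sketch only, not the module's actual code.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_pickle("dataset.pkl")   # placeholder for the pickle written by data_preprocessing
X = df.drop(columns="label")         # assumes a "label" column holds the class
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = SVC(kernel="rbf")            # the svm option; swap in another estimator for the rest
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```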
Pipelines (putting it all together)
An example of how to use all the provided modules and functions can be seen in the file pipeline.py.
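If you prefer to drive the documented command-line steps from a single script rather than reading pipeline.py, the hedged sketch below simply chains the invocations shown above. Every path, the sample rate, and the pickle filename are placeholders; adjust them to where the scripts and data actually live.
```python
# Hedged end-to-end sketch: chains the documented CLI invocations.
# All paths, the sample rate, and the pickle filename are placeholders.
import subprocess

def run(*args):
    subprocess.run(["python", *args], check=True)

# 1. Extract features for each class (directories are hypothetical).
for kind in ("music", "speech"):
    run("batch_feature_extractor.py", f"dataset/{kind}/", f"features/{kind}/", "22050")

# 2. Preprocess the extracted features into a dataset pickle.
run("data_preprocessing.py", "features/music", "features/speech")

# 3. Train one of the supported models on that pickle.
run("model_training.py", "dataset.pkl", "svm")
```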
Support
Reach out to us: