# Technology of the sound and image, AUTH

> Speech/Music classification of audio files using machine learning techniques.

## Clone

Clone this repo to your local machine using git:

```bash
git clone https://github.com/laserscout/THE-Assignment.git
```

## Dependencies

To avoid Python 2/3 incompatibilities and other breakage, we recommend running the scripts in this repository inside a virtual environment created with Python's `venv` module (or any other tool you prefer), as described below:

```bash
cd THE-Assignment/classifier/
python3 -m venv myenv
source myenv/bin/activate
pip install -U scikit-learn
pip install --upgrade pandas
pip install numpy
pip install seaborn
pip install scipy
pip install essentia
```

## Feature extraction

The file `feature_extraction/feature_extractor` is a Python module that uses the open-source library [Essentia](http://essentia.upf.edu/documentation/index.html) to extract audio features from the audio file at the path given as the first parameter and save the feature values to a JSON file at the path given as the second parameter. The module can be imported or executed as a script using the following command:

```bash
python feature_extractor.py <audio_file_path> <output_json_path>
```

A Python script is also provided for batch feature extraction. It can be executed using the following command:

```bash
python batch_feature_extractor.py
```

## Data preprocessing

The file `preprocessing/data_preprocessing` is a Python module that uses the open-source library [scikit-learn](https://scikit-learn.org/stable/) to apply several preprocessing techniques to the previously extracted data. The module can be imported or executed as a script using the following command:

```bash
python data_preprocessing.py
```

## Model training

The file `training/model_training` is a Python module that uses the open-source library [scikit-learn](https://scikit-learn.org/stable/) to train several different models and one ensemble (Random Forest). The module can be imported or executed as a script using the following command:

```bash
python model_training.py <dataset_pickle> <model_selection>
```

Where:

- *dataset_pickle* is the pandas pickle (`.pkl`) file of the dataset dataframe saved on disk. This file is generated by the `data_preprocessing` module.
- *model_selection* is a string denoting which model the script should use. It can be one of: `svm` (SVM), `dtree` (Decision tree), `nn` (Multi-layer Perceptron), `bayes` (Naive Bayes), `rndForest` (Random Forest).

## Pipelines (putting it all together)

An example of how to use all of the provided modules and functions can be found in the file `pipeline.py`. Two minimal, purely illustrative sketches of these steps are also included in the appendices at the end of this README.

## Support

Reach out to us:

- [apostolof's email](mailto:apotwohd@gmail.com "apotwohd@gmail.com")
- [christina284's email](mailto:christtk@auth.gr "christtk@auth.gr")
- [laserscout's email](mailto:frankgou@auth.gr "frankgou@auth.gr")

## License

[![Beerware License](https://img.shields.io/badge/license-beerware%20%F0%9F%8D%BA-blue.svg)](https://github.com/laserscout/THE-Assignment/blob/master/LICENSE.md)
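
## Appendix: illustrative feature extraction sketch

The snippet below is **not** part of this repository; it is a minimal sketch of the kind of Essentia-based extraction that `feature_extractor.py` performs, under the assumption that frame-wise MFCCs and the zero-crossing rate are summarised by their means. The feature set, frame sizes, and command-line handling are illustrative choices, not the module's actual ones.

```python
# Illustrative sketch only (assumed feature set): extract features from an
# audio file and save them to a JSON file, as described in the README.
import json
import sys

import numpy as np
import essentia.standard as es


def extract_features(audio_path):
    """Compute a small dictionary of frame-averaged features for one file."""
    audio = es.MonoLoader(filename=audio_path)()      # load and downmix to mono
    window = es.Windowing(type='hann')
    spectrum = es.Spectrum()
    mfcc = es.MFCC()
    zcr = es.ZeroCrossingRate()

    mfcc_frames, zcr_frames = [], []
    for frame in es.FrameGenerator(audio, frameSize=1024, hopSize=512):
        _, coeffs = mfcc(spectrum(window(frame)))     # MFCC returns (bands, coefficients)
        mfcc_frames.append(coeffs)
        zcr_frames.append(zcr(frame))

    # Summarise frame-level features with their means (a common simplification).
    return {
        'mfcc_mean': np.mean(mfcc_frames, axis=0).tolist(),
        'zcr_mean': float(np.mean(zcr_frames)),
    }


if __name__ == '__main__':
    audio_path, json_path = sys.argv[1], sys.argv[2]
    with open(json_path, 'w') as fp:
        json.dump(extract_features(audio_path), fp)
```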
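
## Appendix: illustrative model selection sketch

Likewise, the snippet below is **not** the repository's `model_training.py`; it only illustrates how a *model_selection* string such as the ones listed above could be mapped to scikit-learn estimators and fitted on a dataframe loaded from a pandas pickle. The label column name `label`, the estimator hyperparameters, and the train/test split are assumptions.

```python
# Illustrative sketch only: map a model_selection string to an estimator
# and train it on a dataset loaded from a pandas pickle (.pkl) file.
import sys

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# One estimator per supported model_selection value.
MODELS = {
    'svm': SVC(),
    'dtree': DecisionTreeClassifier(),
    'nn': MLPClassifier(max_iter=500),
    'bayes': GaussianNB(),
    'rndForest': RandomForestClassifier(),
}

if __name__ == '__main__':
    dataset_pickle, model_selection = sys.argv[1], sys.argv[2]

    df = pd.read_pickle(dataset_pickle)      # dataframe saved by the preprocessing step
    X = df.drop(columns=['label'])           # 'label' column name is an assumption
    y = df['label']
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = MODELS[model_selection]
    model.fit(X_train, y_train)
    print(f'{model_selection} test accuracy: {model.score(X_test, y_test):.3f}')
```

In the actual repository these steps are split across the feature extraction, preprocessing, and training modules described above; see `pipeline.py` for how they are wired together.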