I’m a CompSci student, and for my final-year project I built a web app that recommends a playlist based on the user’s captured facial expression (powered by a machine-learning model). For instance, when the user smiles at the webcam, the system predicts that their current mood is happy, and the app displays a list of tracks whose audio attributes match that mood.
The main goal of this project is to offer a supplementary accessibility feature for music applications and to make things more convenient for users: instead of having to decide what to browse or search for, a suitable playlist is suggested automatically.
I’m curious whether developers and end-users would find this kind of feature worthwhile, so give me a shoutout if you’re down with the idea!
The technologies used are: Python, ReactJS, Flask, and of course the Spotify API.
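To make the mood-to-playlist step concrete, here is a minimal sketch of how a predicted mood label could be translated into track-attribute targets for a Spotify-style recommendations request. This is not the author’s actual code: the mood labels, the `MOOD_FEATURES` table, and the numeric targets are illustrative assumptions (valence and energy are real Spotify audio features, but the exact values a production app would use depend on tuning).

```python
# Hypothetical sketch: map a predicted mood label to query parameters
# for a Spotify-style recommendations request. The numeric targets
# below are illustrative guesses, not tuned values.

MOOD_FEATURES = {
    "happy":   {"target_valence": 0.9, "target_energy": 0.8},
    "sad":     {"target_valence": 0.2, "target_energy": 0.3},
    "angry":   {"target_valence": 0.3, "target_energy": 0.9},
    "neutral": {"target_valence": 0.5, "target_energy": 0.5},
}

def mood_to_query(mood: str, seed_genre: str = "pop") -> dict:
    """Build query parameters for a recommendations request
    from a predicted mood label."""
    features = MOOD_FEATURES.get(mood.lower())
    if features is None:
        raise ValueError(f"unsupported mood: {mood}")
    # In the real app these parameters would be sent to the Spotify
    # Web API along with an OAuth access token; that part is omitted.
    return {"seed_genres": seed_genre, "limit": 20, **features}

print(mood_to_query("happy"))
```

A Flask route could then expose this as something like `/playlist/<mood>`, returning the matched tracks to the React frontend.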