I am exploring whether it is possible to build a deep learning classifier that can detect momentary happiness from minute-long audio clips of a person talking. Such a classifier would be useful to psychologists who need to measure a subject's current level of happiness but do not want expensive equipment to do so, and who may be concerned that asking subjects directly would bias their behavior in the study they were about to participate in (e.g., if a subject realized the study concerned happiness, that knowledge might change how they acted). My collaborators at UBC have already collected the data for this project: a total of 3,931 participant audio recordings from 502 undergraduate students across multiple days. In half of these clips the participants describe their day, and in the other half they describe an image, each for ~60 seconds. I am currently implementing a deep learning approach to classify the audio recordings into 5 levels of happiness.
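As a rough illustration of what such a pipeline might look like, here is a minimal sketch of the classification step: a 60-second waveform is converted to log-spectrogram frames, mean-pooled into a fixed-length feature vector, and passed through a linear softmax layer over 5 happiness levels. All names, parameter values, and the feature choice below are my own assumptions for illustration, not details of the actual implementation, and the linear layer stands in for whatever deep network is ultimately used.

```python
import numpy as np

# Hypothetical parameter choices -- not taken from the actual project.
SR = 16_000          # assumed sample rate
N_FFT = 512          # FFT window length
HOP = 256            # hop between frames
N_CLASSES = 5        # five levels of happiness

def log_spectrogram(wave):
    """Frame the waveform and take the log magnitude of each frame's FFT."""
    frames = []
    for start in range(0, len(wave) - N_FFT + 1, HOP):
        frame = wave[start:start + N_FFT] * np.hanning(N_FFT)
        mag = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(mag))
    return np.stack(frames)              # shape: (n_frames, N_FFT // 2 + 1)

def classify(wave, W, b):
    """Mean-pool spectrogram frames, then apply a linear softmax layer."""
    feats = log_spectrogram(wave).mean(axis=0)   # (257,)
    logits = feats @ W + b                        # (N_CLASSES,)
    probs = np.exp(logits - logits.max())         # numerically stable softmax
    return probs / probs.sum()

# Stand-in for one ~60-second clip; weights are random placeholders.
rng = np.random.default_rng(0)
wave = rng.standard_normal(SR * 60)
W = rng.standard_normal((N_FFT // 2 + 1, N_CLASSES)) * 0.01
b = np.zeros(N_CLASSES)

probs = classify(wave, W, b)   # a probability over the 5 happiness levels
```

In the real system the linear layer would be replaced by a trained deep network (e.g., a CNN over the spectrogram), but the overall shape of the pipeline, waveform in, distribution over 5 levels out, stays the same.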