Characterisation of depression and anxiety symptoms utilising longitudinal speech and facial expression data
Over recent years, the body of literature linking changes in speech and facial expression to depression has grown steadily. This knowledge is, however, limited by the cross-sectional nature of the databases involved. Without longitudinally collected datasets it has been impossible to study how the manifestations of depression in speech and facial expression change over time. The proposed PhD position will utilise a large longitudinal audiovisual dataset currently being collected by Thymia AI, and develop machine learning models that characterise depression and anxiety through symptoms such as fatigue, mood and cognition. Such models are yet to be developed in speech/facial health research. The starting point for this analysis will be sequence clustering procedures, including recurrent autoencoders, and multivariate changepoint detection techniques based on deep learning architectures. The candidate will also explore attentive fusion paradigms that dynamically adapt to changes in the reliability of different signals over time.
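To illustrate the kind of analysis envisaged, the sketch below shows multivariate changepoint detection on a longitudinal feature series using classical binary segmentation with a squared-error cost. This is a deliberately simple stand-in for the deep-learning-based techniques the project will develop; all function names, parameters, and the synthetic data are illustrative assumptions, not part of the project description.

```python
import numpy as np

def segment_cost(x):
    # Cost of a segment: sum of squared deviations from the
    # per-channel segment mean (low cost = homogeneous segment).
    return ((x - x.mean(axis=0)) ** 2).sum()

def best_split(x, min_size=5):
    # Find the split point that most reduces the total cost,
    # keeping both resulting segments at least min_size long.
    n = len(x)
    base = segment_cost(x)
    best_t, best_gain = None, 0.0
    for t in range(min_size, n - min_size):
        gain = base - segment_cost(x[:t]) - segment_cost(x[t:])
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

def binary_segmentation(x, penalty, min_size=5):
    # Recursively split segments while the cost reduction of the
    # best split exceeds the penalty (guards against over-splitting).
    changepoints = []
    def recurse(lo, hi):
        t, gain = best_split(x[lo:hi], min_size)
        if t is not None and gain > penalty:
            changepoints.append(lo + t)
            recurse(lo, lo + t)
            recurse(lo + t, hi)
    recurse(0, len(x))
    return sorted(changepoints)

# Synthetic 2-channel series (e.g. a speech and a facial feature)
# with a mean shift halfway through, standing in for a symptom change.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, (50, 2)),
                    rng.normal(3, 1, (50, 2))])
print(binary_segmentation(x, penalty=50.0))
```

In practice, the deep-learning variants mentioned above would replace the fixed squared-error cost with learned representations (e.g. from a recurrent autoencoder), but the segmentation logic is analogous.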
Datasets for the project have either already been collected or are currently being collected. There are no ethical or governance issues surrounding the use of this data.
Machine Learning, Speech Processing, Computer Vision, mHealth, Depression, Anxiety