In situ contextual healthcare data capture with lightweight interaction techniques
When capturing rich sensor data about health, patient context is vital but challenging to infer accurately. Researchers need the ‘ground truth’ behind sensor data so that they can contextualise it and train models to detect events and behaviours. Although retrospective approaches such as interviews are valuable, they can yield inaccurate data, so there is a desire to capture contextual information in situ, for instance by sending a user a questionnaire asking what they are doing at the time of a given sensor reading. This, however, introduces an interaction burden: the user may not respond in the moment, or may provide insufficient data to infer context. This is a particular challenge for users with impairments (e.g., cognitive or language impairments), and for older, less tech-savvy individuals. As a result, we are unable to capture contextual health data about those who might need the support most.
This project will explore novel interaction techniques to support lightweight and accessible capture of ground-truth data in situ. It will investigate how ubiquitous technologies, such as smartphones, smartwatches and other body-worn and IoT devices, can support users in reporting their contextual data in a more accessible manner. The project will explore input methods that incorporate, but extend, typed text and speech, and will develop structured and accessible approaches to capturing contextual data. Accessible questioning will be supported by prompts that utilise sensor data and other context-supporting data, such as photos and videos taken. For instance, we may better triangulate data by inferring context from a user’s rising pulse, their GPS trace, a photo of the sea, and the phrase “walk on beach” just spoken into their phone.
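To illustrate the kind of triangulation described above, the sketch below combines several lightweight cues (pulse, GPS speed, photo labels, a spoken phrase) into a single contextual label. All names, thresholds, and the rule-based fusion logic are illustrative assumptions for this project idea, not an existing system; in practice the fusion would likely be learned rather than hand-coded.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One in-situ snapshot of sensor and user-supplied data.
    Field names and units are hypothetical."""
    pulse_bpm: float = 0.0                 # heart-rate reading
    gps_speed_mps: float = 0.0             # movement speed derived from GPS
    photo_tags: list = field(default_factory=list)  # labels from an image classifier
    speech_text: str = ""                  # short spoken note from the user

def infer_context(obs: Observation) -> str:
    """Combine lightweight cues into a contextual label (illustrative rules)."""
    cues = set()
    if obs.pulse_bpm > 100:                # elevated pulse suggests exertion
        cues.add("exertion")
    if 0.5 < obs.gps_speed_mps < 2.5:      # roughly walking pace
        cues.add("walking")
    if {"sea", "beach"} & set(obs.photo_tags):
        cues.add("coastal")
    if "walk" in obs.speech_text.lower():  # spoken hint reinforces movement
        cues.add("walking")
    if {"walking", "coastal"} <= cues:
        return "walk on beach"
    if "walking" in cues:
        return "walking"
    return "unknown"

obs = Observation(pulse_bpm=105, gps_speed_mps=1.4,
                  photo_tags=["sea"], speech_text="walk on beach")
print(infer_context(obs))  # prints "walk on beach"
```

Even this toy version shows why multiple cues help: a raised pulse alone is ambiguous, but combined with walking-pace GPS data, a coastal photo, and a spoken phrase, the system can produce a usable ground-truth label with minimal user effort.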
In situ data capture, accessibility, machine learning, interaction techniques, human-computer interaction