With the increase in life expectancy, good nutrition has become one of the most important topics for scientific research, especially for the elderly.
In particular, for subjects of advanced age with health issues caused by disorders such as Alzheimer's disease and dementia, monitoring dietary habits to avoid excessive or insufficient nutrition plays a critical role.
The following sensor typologies, and their related issues, can be considered:
Wearable sensors: usually accelerometers positioned on the body, equipped with a battery and a wireless communication interface. This technology requires the subjects' collaboration in wearing the device and recharging the battery, and subjects affected by neurological disorders may find it difficult to carry out these procedures.
Ambient sensors: magnetic sensors for doors and windows, or bed and armchair sensors, which provide information on the monitored subject's interaction with these objects. In contrast to wearable sensors, this technology does not require the subject's collaboration.
Video cameras: the classic technology for monitoring domestic and non-domestic environments. Cameras have the advantage of a wider field of view than the sensors described above. Moreover, digital technologies allow privacy preservation, since it is possible to mask faces or other personal details.
However, this type of sensor can be problematic in some situations. First, since the video captured by the cameras depends on environmental lighting, strong variations in brightness can make it difficult to capture images or videos of sufficient quality. Second, installing cameras in rooms such as bathrooms or bedrooms can raise privacy concerns.
Starting from an application that monitors the food intake actions of people during a meal, presented in a previously published paper, the present work describes improvements that enable the application to run in real time.
The proposed solution uses the Kinect v1 device, installed on the ceiling in a top-down view, in an effort to preserve the subjects' privacy.
The food intake actions are estimated from the analysis of depth frames. The innovations introduced in this work are the automatic identification of the initial and final frames for the detection of food intake actions, and a thorough revision of the procedure that identifies food intake actions with respect to the original work, in order to optimize the algorithm's performance.
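The idea of automatically identifying the initial and final frames of an action from depth data can be illustrated with a minimal sketch. This is not the authors' actual procedure: the function name, the per-frame motion score (mean absolute inter-frame depth difference), and the threshold value are all assumptions introduced purely for illustration.

```python
def detect_action_bounds(depth_frames, threshold=5.0):
    """Return (start, end) indices of the frames bounding the detected
    motion interval, or None if no inter-frame change exceeds the
    threshold. Each frame is a flat list of depth values (e.g. in mm).
    Illustrative sketch only, not the procedure from the paper.
    """
    def frame_diff(a, b):
        # Mean absolute per-pixel depth difference between two frames.
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    # Motion score for each pair of consecutive frames.
    scores = [frame_diff(depth_frames[i], depth_frames[i + 1])
              for i in range(len(depth_frames) - 1)]

    # Transitions where the scene changed more than the threshold.
    active = [i for i, s in enumerate(scores) if s > threshold]
    if not active:
        return None
    # First and last frames involved in above-threshold motion.
    return active[0], active[-1] + 1
```

In a real pipeline the threshold would have to be calibrated against sensor noise, and the score would typically be restricted to a region of interest (e.g. the area over the plate) rather than the whole frame.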
An evaluation of the computational effort and system performance, compared with the previous version of the application, demonstrates that the solution presented here can be applied in real time.