We live in an era in which average life expectancy has increased, leading to an increasingly significant growth of the senior population. As a result, more and more elderly people live alone at home and have difficulty finding people who can attend to and help them in their daily lives. It is also recognized that activity and physical exercise are essential to maintain and promote health, particularly in this age group, generally improving people's quality of life and safety. Accordingly, many commercial systems have been developed to monitor people's activity, in particular the elderly, following what they do during the day, collecting statistical activity data, and identifying potentially dangerous situations (e.g. falls). However, existing solutions usually require the acquisition of generally high-cost equipment and base their operation on the analysis of a single kind of information or physical phenomenon (e.g. accelerometer readings, scene analysis, etc.), which limits the results and the monitoring fidelity.
In this context, the work presented here proposes an economical monitoring system based on the fusion of several kinds of information (e.g. physical activity, user's location, etc.) collected in a smart environment set up for this purpose. The system combines information collected and processed by several components. It uses, for example, mobile devices that are commonplace nowadays and available at reasonable cost (cf. smartphones), equipped with a set of sensors and adequate processing capabilities; it also uses other components commonly found in residential contexts (cf. a computer and low-resolution cameras aimed at the places to monitor) that can be integrated and reused in the proposed solution.
This dissertation proposes and describes the physical and logical structure of the PADS (Personal Activity Detection System). The developed prototype is organized into several components able to collect and fuse different kinds of information: i) an Android application that identifies and records five distinct physical activities (cf. standing, walking, running, lying down, and falling); ii) an application developed in C++ that detects the user's presence based on face recognition; iii) an application that uses Google Cloud services to consult the user's scheduled drug doses; iv) an application in C++ that fuses all the information obtained by the previous modules and determines the user's activity. The global architecture of the system and the various components involved are presented, describing in detail all the algorithms and implementation details.
We also evaluated the activity detection module, as well as the fusion application. We sought to analyze and compare different activity determination techniques that could be considered as alternatives to the option applied in this project. We also analyzed other works related to activity detection that draw on several information sources, comparing the aspects and strengths of each alternative.
The full dissertation is available at http://hdl.handle.net/10284/3825