smart health RheinMain University of Applied Sciences

ActiveSense Application

ActiveSense is a wearable application for activity tracking and recognition developed at RheinMain University of Applied Sciences. Designed to streamline caregiving workflows and provide detailed insights into patient activities, ActiveSense leverages modern sensor data and machine learning models to monitor and classify physical activity in real time.

ActiveSense is built with the goal of enhancing the quality of care for patients while improving efficiency for caregivers. The ActiveSense smartwatch continuously tracks the wearer's movements and activities. This real-time monitoring offers caregivers a clear and detailed view of daily activities, which can inform personalized care plans and improve patient outcomes.

ActiveSense embraces simplicity while maintaining a distinctive and cohesive brand identity. As a WearOS application, it adheres to the Material 2 Guidelines for WearOS apps, ensuring a user experience that is both familiar and intuitive. To strengthen its unique identity, however, ActiveSense incorporates custom design elements and assets that align with its core values and vision.

ActiveSense

Development of classification procedures for the automated documentation of everyday activities

The classification of human movements using smartwatches and machine learning is an approach to support caregivers in the daily documentation of the everyday activities of patients whose movement data should be monitored for health reasons and who require assistance with this monitoring. Automating care documentation is intended to reduce the long-term workload of care staff and improve diagnostics.

This project aims to develop classification procedures for the automation of care documentation. To this end, it investigates how data describing everyday activities can be sequentially extracted and classified using smartwatch sensor technology. In this context, four classification concepts are presented that should enable the differentiation, and thus better identification, of everyday activities: continuous classification, classification over a period of time, classification under real conditions, and classification of complex everyday activities. The four classification concepts are illustrated. They differ in the structure and process of classifying movement data, up to a statement about the current activity, which ultimately results in documentation.

Development of an activity tracking classifier

An activity tracking classifier was developed in this project for the real-time processing of sensor data from a watch. The basis for tracking sensor data is the smartwatch application, which was implemented as a standalone app on an Apple Watch Series 7. The Apple Watch Series 7 has a touch display and, in addition to tracking, is used in particular to display and respond to notifications received on a linked iPhone. It is operated by touching the display.

The ATK offers methods for querying sensor data, temporarily storing data in the smartwatch's memory, and labeling data. The ATK also has an interface for exchanging sensor data with a web server via a WebSocket. In the event of complications, backup methods can trigger a resend of the sensor data generated in a session.

Accelerometer, gyroscope, and magnetometer contribute to the mathematical calculation of the device orientation. This orientation is mapped in three-dimensional space so that the movement of the smartwatch can also be tracked live via an animation. The square in the illustration symbolizes the smartwatch; the (V) stands for the front. The coordinate system is aligned according to the pitch, roll, and yaw angles. The Z-axis (yaw, shown in blue) points out of the smartwatch and towards the test person.

All incoming data packets are stored in a MySQL database. This data aggregation makes it possible to track and label thousands of sensor data points in a short time.
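The pitch, roll, and yaw angles mentioned above can, for instance, be derived from raw accelerometer and magnetometer samples. The following is a minimal Python sketch of one common convention for this calculation, not the ATK's actual implementation; gyroscope fusion (e.g. via a complementary or Kalman filter), which a production system would use for stability, is omitted:

```python
import math

def orientation_from_sensors(ax, ay, az, mx, my, mz):
    """Estimate pitch, roll, and (tilt-compensated) yaw in radians
    from one accelerometer and one magnetometer sample."""
    # Roll: rotation about the X-axis, from the gravity components.
    roll = math.atan2(ay, az)
    # Pitch: rotation about the Y-axis.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Tilt-compensate the magnetometer reading before computing yaw,
    # so heading stays valid when the watch is not held level.
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-my2, mx2)
    return pitch, roll, yaw

# A watch lying flat (gravity on Z, magnetic field along X)
# yields pitch = roll = yaw = 0 in this convention.
angles = orientation_from_sensors(0.0, 0.0, 1.0, 1.0, 0.0, 0.0)
```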

The VeinXam Project

A venous insufficiency, to which the deep veins of the lower human extremities are particularly susceptible, can lead to serious diseases, such as a deep vein thrombosis (DVT) with subsequent risks of severe implications, e.g., pulmonary embolism or a post-thrombotic syndrome. The current standard procedure to diagnose venous insufficiency is performed exclusively in medical offices and hospitals in the form of in-patient treatments with special medical equipment. This hurdle for the patient, combined with the often-diffuse symptoms of venous insufficiency, may lead to a late discovery of diseases such as DVTs and increase the risk of secondary, potentially life-threatening diseases as well as treatment costs.

To address these issues, VeinXam proposes a novel method for self-controlled, continuous, and mobile monitoring of the current deep venous function using low-cost wearable sensor technology and a smartphone app. VeinXam adapts the Light Reflection Rheography (LRR), a proven and frequently used non-invasive diagnostic tool used to evaluate the function of lower limb blood vessels via blood flow data coming from a photoplethysmogram (PPG).

The PPG data is recorded using a custom body-worn sensor unit integrating an optical analog front end with LEDs and photodiodes, a skin temperature sensor, a battery, and a system-on-chip handling all data acquisition and communication tasks using Bluetooth Low Energy. The custom smartphone app then extracts all necessary vital parameters from the acquired data to assess the deep venous health state. By using intuitive design elements, animated measurement guidance, a database of the measurement data, and a health-state-based notification system, the user gets easy access to a self-controlled deep venous health monitoring system aiming to deliver critical early-stage information about pathological changes in the blood flow in the lower limbs.
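A key parameter in LRR is the venous refill time: after a standardized exercise, the PPG trace is observed as the veins refill, and an abnormally short refill time points to impaired venous function. The sketch below shows how such a parameter could be extracted from a PPG trace; it is illustrative only, not VeinXam's actual signal processing, and the 90 % recovery threshold is an assumption:

```python
import numpy as np

def venous_refill_time(ppg, fs, exercise_end_idx, recovery_frac=0.9):
    """Estimate the venous refill time in seconds from a PPG trace.

    ppg              : 1-D array of PPG samples (arbitrary units)
    fs               : sampling rate in Hz
    exercise_end_idx : sample index where the exercise phase ends
    recovery_frac    : fraction of the resting baseline that counts
                       as "refilled" (assumed threshold)
    """
    # Resting level, taken from the first half of the pre-exercise phase.
    baseline = np.median(ppg[:exercise_end_idx // 2])
    recovery = ppg[exercise_end_idx:]
    refilled = np.nonzero(recovery >= recovery_frac * baseline)[0]
    if refilled.size == 0:
        return None  # signal never recovered within the recording
    return refilled[0] / fs

# Synthetic trace: the signal deviates from baseline during exercise
# and returns exponentially toward baseline as the veins refill.
fs = 50  # Hz, assumed sampling rate
t = np.arange(0, 40, 1 / fs)
exercise_end = 10 * fs
signal = np.ones_like(t)
signal[250:exercise_end] = np.linspace(1.0, 0.4, 250)
signal[exercise_end:] = 1.0 - 0.6 * np.exp(-(t[exercise_end:] - 10) / 8.0)
refill = venous_refill_time(signal, fs, exercise_end)
```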

Speech-Based Age Classification

The project "Speech-Based Age Classification" focuses on developing and evaluating speech recognition systems that accurately identify the age group of speakers. This capability is particularly useful in healthcare, where recognizing different age groups aids in classifying and documenting patient demographics, ensuring tailored care for each group.

While speech recognition models like Apple's "Siri" and Amazon's "Alexa" have been widely used for general voice commands since the early 2010s, this project explores their potential for more sophisticated tasks like age classification using Transformer-based architectures. These architectures offer superior performance and efficiency, making them ideal for this task.

The research builds on Burkhardt et al.'s work, examining how variations in speech data, model architecture, and fine-tuning methods impact classification accuracy. The project aims to optimize these factors to achieve precise age classification with minimal data input.

Methodologically, the project will use datasets and pre-trained Transformer models available on platforms like Hugging Face. Models such as Wav2Vec2, Wav2Vec2 XLS-R, HuBERT, and Whisper will undergo fine-tuning and testing through transfer learning to identify the most effective models for distinguishing between different age groups in speech, improving patient categorization and management in healthcare settings.
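The transfer-learning idea behind this setup, freezing a pre-trained backbone and training only a small classification head on its embeddings, can be illustrated in miniature. The sketch below trains a softmax head on synthetic stand-in "embeddings" with NumPy; the real pipeline would instead feed labeled speech through a pre-trained Hugging Face model such as Wav2Vec2, and the three age groups here are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen backbone outputs: one embedding per utterance.
# In the real pipeline these would come from a pre-trained model
# whose weights stay frozen while only the head is trained.
n_per_class, dim, n_classes = 100, 16, 3   # 3 age groups (assumed)
centers = rng.normal(size=(n_classes, dim)) * 3
X = np.vstack([centers[c] + rng.normal(size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Train a linear softmax head with full-batch gradient descent.
W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
lr = 0.05
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)       # cross-entropy gradient
    W -= lr * X.T @ grad
    b -= lr * grad.sum(axis=0)

accuracy = (np.argmax(X @ W + b, axis=1) == y).mean()
```

The design point this illustrates: because the backbone is frozen, only the small head (here a single linear layer) is optimized, which is what makes fine-tuning feasible with minimal data input.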

Development of a real-time Human Activity Recognition System

In this work, a prototype of a real-time HAR system is presented that classifies predefined activities from the smartwatch sensor data of a test person in real time, using a Long Short-Term Memory (LSTM) network on the server side. Systematic comparisons of sensor data combinations show that the three activities of writing, eating, and drinking can be classified with an accuracy of 98.18% using gyroscope data alone or in combination with acceleration data. The prototype can also classify the four very similar activities of eating, drinking, talking on the phone, and blowing the nose with 90.59% classification accuracy, using all the sensor data from the accelerometer, gyroscope, gravity, and attitude. It can be seen that the significance of a sensor for classification varies depending on the specific activity. Furthermore, in the context of this work, a window size of six seconds proved to be the most suitable input size for the realized LSTM. Audio data also represents a promising feature for the classification of very similar activities: combinations of three or more sensors always benefited from the decibel values and were able to achieve better classifications. In summary, a live classification of activities based on smartwatch sensor data was achieved, which enables the monitoring of everyday activities.
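The six-second windows that feed the LSTM can be cut from the continuous sensor stream with a simple sliding window. A minimal NumPy sketch of this step follows; the 50 Hz sampling rate and 50 % overlap are assumptions for illustration, not necessarily the prototype's actual parameters:

```python
import numpy as np

def sliding_windows(stream, fs=50, window_s=6.0, overlap=0.5):
    """Cut a (n_samples, n_channels) sensor stream into fixed-size
    windows shaped (n_windows, win_len, n_channels) for LSTM input."""
    win = int(window_s * fs)           # samples per window
    step = int(win * (1 - overlap))    # hop size between windows
    starts = range(0, len(stream) - win + 1, step)
    return np.stack([stream[s:s + win] for s in starts])

# One minute of placeholder 9-channel sensor data at 50 Hz
stream = np.zeros((60 * 50, 9))
windows = sliding_windows(stream)      # shape (19, 300, 9)
```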

KelsterBoard


KelsterBoard is an urban data platform designed as part of a smart city project that treats citizens as its primary target group. Use cases are designed to support the needs of citizens, and the focus is on usability: citizens should be able to use the application on their own. The city of Kelsterbach is working on a prototype urban data platform in a research collaboration with RheinMain University of Applied Sciences as part of the “Starke Heimat Hessen” funding program. The focus of this development is usability, i.e. ease of use for all future users.

We are very pleased to present the first draft of our urban data platform at our own stand at the Smart Region Summit organized by the Hessian Ministry of Digital Affairs on 27 April 2023 at the Centralstation in Darmstadt! The event is designed for municipalities, but of course we are also happy to welcome every other visitor to our stand!

Infodoq

There are currently around 15 self-managed, outpatient assisted living communities for people with dementia in Hesse. Due to the many complex coordination and harmonization tasks, such residential care communities are dependent on functioning communication tools. As part of the INFODOQ research project, an online platform was developed at the RheinMain University of Applied Sciences (HSRM) for these dementia residential communities, in which care is documented and communicated transparently for all those involved. The sponsor of the model project and cooperation partner in this practical research project is the Frankfurt Hans and Ilse Breuer Foundation. The project is funded by the Hessian Ministry of Social Affairs and an association of statutory health insurance companies.

“With INFODOQ, we want to make care documentation less bureaucratic and at the same time create an effective and user-friendly information, communication and organizational tool for the people who contribute to the well-being of shared flat tenants on a daily basis,” explains Prof. Dr. Ludger Martin from the Department of Design, Informatics and Media at HSRM.

StattHaus Offenbach, run by the Hans and Ilse Breuer Foundation, was involved in the development and evaluation of INFODOQ and has been using the platform since 2019. “The feedback on INFODOQ has been very good. In particular, it makes communication with relatives, care services and volunteers very transparent. The coordination of appointments is also excellent,” says Maren Ewald, Head of the Dementia Center, who is now even receiving inquiries from shared flats in other federal states: “There is a great need for such applications.” During the project, it quickly became clear that INFODOQ needed to be adapted even more closely to the needs of users.
While INFODOQ could initially only be used on a PC or laptop, the mobile version INFODOQ Mobile has now made the platform even more flexible and practical: “The smartphone app is much more practical for our care team,” reports Stephanie Völs, owner and managing director of the Offenbach-based care service Völs & Schikowski. In addition to providing care, the team also takes on many support activities such as trips to the zoo or cafés. The care team can use the mobile app to document the activities and feedback from participants directly and share them with relatives in real time.

The aim is to integrate INFODOQ even more flexibly into users' everyday lives. Users are kept up to date with push notifications, and the platform can also be operated by voice input, as is already familiar from other communication apps. For Martin, the project will continue to grow as it is constantly developed with input from the field: “We want to add more functions to INFODOQ based on our experience so far. The plan is to integrate a digital bulletin board where photos and recipes can be uploaded or surveys created. This will make communication even more transparent, decisions can be made more quickly and any problems that arise can be resolved promptly.”

Optimal 2D-LiDAR Sensor Coverage of a Room

This project focuses on developing an algorithm for optimizing 2D-LiDAR sensor placement to ensure maximum room coverage with the fewest sensors possible. The algorithm is particularly useful in smart environment applications, such as movement and fall detection in healthcare settings, security monitoring, and automated home systems.

In healthcare, particularly in elderly care, fall detection is crucial. Quick and accurate detection of falls can significantly reduce response times, potentially saving lives. The algorithm ensures that the entire room is covered by strategically placing sensors, minimizing blind spots where a fall or other critical events might go undetected.

In security systems, comprehensive monitoring is essential for detecting unauthorized access or movement within a space. The algorithm optimizes sensor placement to ensure that all areas of interest are continuously monitored with minimal equipment, reducing both costs and installation complexity.

Additionally, in automated homes or smart buildings, where multiple devices need to interact seamlessly, the algorithm helps in maintaining consistent and reliable sensor coverage. This ensures that smart systems can accurately detect and respond to human presence or movement, enhancing both comfort and safety.

By using grid-based mapping and iterative optimization, the algorithm can adapt to various room configurations and obstacles, making it a versatile tool for a wide range of indoor monitoring applications.
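The grid-based mapping and iterative optimization described above can be illustrated with a greedy strategy: discretize the room into cells, compute which cells each candidate position can see (a straight line not blocked by an obstacle), and repeatedly place the sensor that covers the most still-uncovered cells. The following is a small Python sketch of that general idea, not the project's actual algorithm; the sensor range, grid resolution, and ray-sampling occlusion test are assumptions:

```python
import numpy as np

def visible_cells(grid, pos, max_range):
    """Set of free cells visible from `pos` within `max_range`,
    using straight-line sampling for occlusion (1 = obstacle)."""
    rows, cols = grid.shape
    seen = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] == 1:
                continue
            d = np.hypot(r - pos[0], c - pos[1])
            if d > max_range:
                continue
            # Sample points along the ray to check for obstacles.
            steps = max(int(d * 2), 1)
            blocked = any(
                grid[round(pos[0] + (r - pos[0]) * k / steps),
                     round(pos[1] + (c - pos[1]) * k / steps)] == 1
                for k in range(1, steps))
            if not blocked:
                seen.add((r, c))
    return seen

def place_sensors(grid, max_range=10.0):
    """Greedily place sensors until every reachable free cell is covered."""
    free = {(r, c) for r in range(grid.shape[0])
            for c in range(grid.shape[1]) if grid[r, c] == 0}
    uncovered, sensors = set(free), []
    while uncovered:
        best = max(free, key=lambda p: len(visible_cells(grid, p, max_range)
                                           & uncovered))
        gain = visible_cells(grid, best, max_range) & uncovered
        if not gain:
            break  # remaining cells are unreachable from any position
        sensors.append(best)
        uncovered -= gain
    return sensors
```

Greedy set cover does not guarantee the true minimum number of sensors, but it is a standard, well-understood approximation and adapts naturally to arbitrary obstacle layouts, which is the versatility the paragraph above refers to.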