➡️ Visual Device Recognition
The first step was making the app smart enough to recognize what the user is holding. Using ML Kit, the app identifies the type of medical device in frame, such as a specific injection pen or a blood collection tool, so it can deliver instructions tailored to that exact device.
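ML Kit runs on-device inside the mobile app, but the downstream logic is simple to sketch: map the classifier's label to the matching instruction set, and bail out when confidence is low. The labels, instruction text, and threshold below are all illustrative, not the app's actual values.

```python
# Map a recognized device label to its instruction set.
# Labels and instructions are hypothetical; in the app, the label and
# confidence would come from an on-device ML Kit image classifier.

DEVICE_INSTRUCTIONS = {
    "injection_pen": ["Remove cap", "Check dose", "Press firmly at the site"],
    "blood_collection_tool": ["Remove cap", "Prick fingertip", "Collect sample"],
}

def instructions_for(label: str, confidence: float, threshold: float = 0.8):
    """Return the instruction list for a recognized device, or None when
    the classifier is unsure or the label is unknown."""
    if confidence < threshold:
        return None  # too uncertain: ask the user to reposition the device
    return DEVICE_INSTRUCTIONS.get(label)
```

Returning `None` on low confidence lets the UI prompt the user to hold the device more clearly in view instead of guessing.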
➡️ Pose Detection
Correct body posture is critical for safe injections. We integrated MediaPipe, Google’s cross-platform framework, to detect the user’s overall pose, such as arm placement and angle, so the app can guide users to adjust their posture if needed before proceeding.
➡️ Holding Position Detection
To prevent misalignment or incorrect application, we used OpenCV to analyze the angle and contact point between the device and the user’s body. The system checks whether the device is positioned in the right spot before allowing the user to continue.
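In the app, OpenCV estimates the device's tip position and axis from the camera frame; the alignment decision itself is plain geometry. A simplified sketch with hypothetical pixel thresholds, assuming the skin normal is roughly vertical in the image:

```python
import math

def device_alignment_ok(device_tip, target_point, device_axis,
                        max_offset_px=25.0, max_tilt_deg=10.0):
    """Check that the device tip is near the expected contact point and
    roughly perpendicular to the skin. Tip and axis would be estimated
    with OpenCV (e.g. contour fitting) in the real pipeline; thresholds
    and the vertical-skin assumption are illustrative."""
    # Pixel distance between the detected tip and the target spot.
    offset = math.hypot(device_tip[0] - target_point[0],
                        device_tip[1] - target_point[1])
    # Tilt of the device axis away from vertical (assumed skin normal).
    tilt = abs(math.degrees(math.atan2(device_axis[0], device_axis[1])))
    return offset <= max_offset_px and tilt <= max_tilt_deg
```

Both checks must pass before the app lets the user proceed, which is exactly the gating behavior described above.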
➡️ Holding Time Detection
Accuracy isn’t just about position; it’s also about timing. This feature tracks how long the device is held against the skin to make sure the injection or blood collection process is fully completed, reducing errors from early removal or rushing.
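The hold check is a small per-frame state machine: contact frames accumulate time, and losing contact resets it, so lifting the device early restarts the countdown. A minimal sketch with an illustrative duration:

```python
class HoldTimer:
    """Tracks how long the device stays in contact with the skin.
    The required duration here is a placeholder, not a clinical value."""

    def __init__(self, required_seconds: float = 10.0):
        self.required = required_seconds
        self.held = 0.0

    def update(self, in_contact: bool, dt: float) -> bool:
        """Feed one frame: whether contact was detected plus the frame
        duration in seconds. Returns True once the hold is complete."""
        if in_contact:
            self.held += dt
        else:
            self.held = 0.0  # lifted too early: start over
        return self.held >= self.required
```

Resetting on lost contact is the design choice that catches the "early removal" error the paragraph describes.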
➡️ De-capping Detection
Before use, injection pens or collection tools need to be uncapped. We trained the system using TensorFlow and Ultralytics YOLO to identify whether the cap is still on. If the device isn’t ready, the app gently alerts the user to correct it before proceeding.
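Once the detector runs, the readiness decision is a filter over its output. This sketch mimics the (label, confidence) pairs a custom Ultralytics YOLO model might emit; the class name and threshold are hypothetical:

```python
def cap_still_on(detections, conf_threshold: float = 0.6) -> bool:
    """Decide whether the cap is still on, given object-detection results
    as (label, confidence) pairs. The "cap_on" class name and threshold
    are illustrative stand-ins for the trained model's actual classes."""
    return any(label == "cap_on" and conf >= conf_threshold
               for label, conf in detections)

def readiness_message(detections) -> str:
    """Turn the check into the gentle prompt the app shows the user."""
    if cap_still_on(detections):
        return "Please remove the cap before continuing."
    return "Device is ready."
```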
➡️ Step-by-Step Visual Guidance
Every interaction is designed to feel intuitive. Clear visuals, animations, and voice prompts walk users through each step in real time. Whether someone is using the app for the first time or managing a routine, it ensures they feel confident and supported throughout.
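Under the hood, the walkthrough above can be modeled as a step sequencer that only advances when the current step's check (pose, alignment, cap, hold time) passes, so users can't accidentally skip ahead. A minimal sketch with illustrative step names:

```python
class GuidedFlow:
    """Minimal step sequencer for the guided walkthrough. Each step
    advances only when its detection check passes; step names here are
    illustrative, not the app's actual flow."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.index = 0

    @property
    def current(self):
        """The step currently shown to the user, or None when finished."""
        return self.steps[self.index] if self.index < len(self.steps) else None

    def advance(self, check_passed: bool) -> None:
        """Move to the next step only if the current check succeeded."""
        if check_passed and self.current is not None:
            self.index += 1

    @property
    def done(self) -> bool:
        return self.index >= len(self.steps)
```

Each `current` step would drive the matching visuals, animation, and voice prompt for that stage.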