Deep Learning-Driven Multimodal Detection and Movement Analysis of Objects in Culinary

Tahoshin Alam Ishat

Published: 2025/8/21

Abstract

This research explores existing models and fine-tunes them to combine a YOLOv8 segmentation model, an LSTM trained on hand keypoint motion sequences, and an ASR model (whisper-base), extracting enough information for an LLM (TinyLLaMa) to predict the recipe and generate a step-by-step guide to the cooking procedure. All data were gathered by the author to build a robust, task-specific system that performs well in complex and challenging environments, demonstrating the broad applicability of computer vision to daily activities such as kitchen work. This work extends the field toward many more crucial tasks of everyday life.
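The abstract describes a four-stage multimodal pipeline: segment objects in the video, classify hand motion, transcribe speech, and fuse the three signals into a prompt for the LLM. Below is a minimal sketch of that pipeline, assuming the ultralytics, openai-whisper, torch, and transformers packages; the checkpoint names, the LSTM architecture, and the glue logic are illustrative assumptions, not the author's exact implementation.

```python
import torch
import torch.nn as nn
import whisper
from ultralytics import YOLO
from transformers import AutoModelForCausalLM, AutoTokenizer

class HandMotionLSTM(nn.Module):
    """Classifies a sequence of 2D hand keypoints into a cooking action.
    Hypothetical architecture; the paper's exact layers are not given."""
    def __init__(self, n_keypoints=21, hidden=128, n_actions=10):
        super().__init__()
        self.lstm = nn.LSTM(n_keypoints * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):            # x: (batch, time, n_keypoints * 2)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # logits over action classes

# 1) Segment utensils/ingredients in a video frame with a YOLOv8-seg checkpoint.
seg_model = YOLO("yolov8n-seg.pt")
detections = seg_model("frame.jpg")[0]
objects = [seg_model.names[int(c)] for c in detections.boxes.cls]

# 2) Classify the hand motion from a keypoint sequence (weights assumed trained;
#    random tensor stands in for 30 frames of 21 (x, y) hand points).
motion_model = HandMotionLSTM()
keypoints = torch.randn(1, 30, 42)
action_id = motion_model(keypoints).argmax(-1).item()

# 3) Transcribe the cook's speech with whisper-base.
speech = whisper.load_model("base").transcribe("kitchen_audio.wav")["text"]

# 4) Fuse the three signals into a prompt and ask a TinyLLaMa chat model
#    for the recipe and a step-by-step guide.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
llm = AutoModelForCausalLM.from_pretrained(model_id)
prompt = (f"Objects seen: {objects}. Hand action id: {action_id}. "
          f"Speech: {speech} "
          f"Predict the recipe and give step-by-step cooking instructions.")
inputs = tok(prompt, return_tensors="pt")
out = llm.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))
```

In this sketch the fusion is done at the text level (detections, action label, and transcript are concatenated into one prompt), which is one plausible way to let a small LLM consume the multimodal evidence without retraining it.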
