This study’s analytical approach has several limitations. First, it lacks 3D kinematic data, meaning that depth information is not captured. Second, the system cannot record kinematic data when the surgical tool is outside the microscope’s field of view. Additionally, the semantic segmentation model occasionally misclassifies images containing shadows cast by surgical instruments or hands. To address this, future research should include shadowed images in the training dataset to improve model robustness.
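Where annotated shadowed frames are scarce, one option would be to generate them synthetically during training. The snippet below is a minimal sketch of such an augmentation, assuming video frames are available as NumPy arrays; the function name and parameters are illustrative and are not part of this study’s pipeline.

```python
import numpy as np

def add_random_shadow(image: np.ndarray, intensity: float = 0.5, rng=None) -> np.ndarray:
    """Darken a random quadrilateral band to mimic a shadow cast by an instrument or hand.

    `image` is an H x W x 3 uint8 frame; `intensity` in (0, 1] controls shadow darkness.
    (Hypothetical helper for data augmentation, not taken from the study.)
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Pick two random x-coordinates on the top and bottom edges to define a shadow band.
    xs_top = np.sort(rng.integers(0, w, size=2))
    xs_bot = np.sort(rng.integers(0, w, size=2))
    # Build a boolean mask for the band spanning the full image height.
    ys, xs = np.mgrid[0:h, 0:w]
    t = ys / max(h - 1, 1)                      # interpolation factor from top to bottom edge
    left = xs_top[0] * (1 - t) + xs_bot[0] * t
    right = xs_top[1] * (1 - t) + xs_bot[1] * t
    mask = (xs >= left) & (xs <= right)
    # Darken the masked pixels.
    shadowed = image.astype(np.float32)
    shadowed[mask] *= (1.0 - intensity)
    return shadowed.clip(0, 255).astype(np.uint8)
```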
Further investigation is also needed to optimize the AI model’s network architecture, as this study used ResNet-50 and YOLOv2. Exploring alternative deep learning models or fine-tuning the existing ones could enhance the accuracy and generalizability of surgical video analysis.
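As one illustration of such fine-tuning, the following sketch assumes a PyTorch pipeline with an image-level classification head on a ResNet-50 backbone; the class count and hyperparameters are hypothetical and not drawn from this study.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Hypothetical number of target classes in the surgical video dataset.
NUM_CLASSES = 5

# Load an ImageNet-pretrained ResNet-50 and replace its classification head.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze early layers and fine-tune only the last block and the new head,
# a common strategy when the domain-specific dataset is small.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```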
The study’s sample size of participating surgeons was relatively small, despite the inclusion of surgeons with varying skill levels. Moreover, data from repeated training sessions were not analyzed to assess learning curves or the impact of feedback on training effectiveness. Future studies should examine how AI-assisted feedback influences the learning curves of surgical trainees and whether real-time performance tracking leads to more efficient skill acquisition.
To provide deeper insight into the relationship between the quality of the final surgical product and technical factors, future research could incorporate leakage tests or the Anastomosis Lapse Index. This index identifies ten distinct types of anastomotic errors and can serve as a criterion-based objective assessment tool covering a wide range of microsurgical technical aspects.