Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks

Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller

2019-08-20 · Surgical Skills Evaluation · Skills Evaluation · General Classification · BIG-bench Machine Learning · Interpretable Machine Learning

Paper · PDF · Code (official)

Abstract

Purpose: Manual feedback from senior surgeons observing less experienced trainees is a laborious task that is expensive, time-consuming, and prone to subjectivity. With the number of surgical procedures increasing annually, there is an unprecedented need for accurate, objective, and automatic evaluation of trainees' surgical skills in order to improve surgical practice.

Methods: In this paper, we designed a convolutional neural network (CNN) to classify surgical skills by extracting latent patterns in the trainees' motions performed during robotic surgery. The method is validated on the JIGSAWS dataset for two surgical skills evaluation tasks: classification and regression.

Results: Our results show that deep neural networks constitute robust machine learning models that reach new competitive state-of-the-art performance on the JIGSAWS dataset. While leveraging CNNs' efficiency, we minimized their black-box effect using the class activation map technique.

Conclusions: This characteristic allows our method to automatically pinpoint which parts of the surgery most influenced the skill evaluation, thus explaining a surgical skill classification and providing surgeons with a novel personalized feedback technique. We believe this type of interpretable machine learning model could integrate within "Operation Room 2.0" and support novice surgeons in improving their skills to eventually become experts.
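The pipeline the abstract describes — a fully convolutional network over kinematic time series, global average pooling (GAP) into a linear classifier, and a class activation map (CAM) that scores each time step's contribution — can be sketched as below. This is not the authors' code: the layer sizes, weights, and the `conv1d` helper are illustrative assumptions, and random weights stand in for trained ones; only the GAP-then-CAM structure follows the technique named in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Same'-padded 1-D convolution + ReLU. x: (in_ch, time), w: (out_ch, in_ch, k).
    Toy helper, not the authors' implementation."""
    out_c, in_c, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    t = x.shape[1]
    y = np.zeros((out_c, t))
    for o in range(out_c):
        for i in range(in_c):
            for j in range(k):
                y[o] += w[o, i, j] * xp[i, j:j + t]
    return np.maximum(y, 0.0)

# Toy kinematic input: 6 channels (e.g. Cartesian velocities), 128 time steps.
x = rng.standard_normal((6, 128))

w1 = rng.standard_normal((16, 6, 7)) * 0.1   # conv block 1 (illustrative sizes)
w2 = rng.standard_normal((32, 16, 5)) * 0.1  # conv block 2 = last conv layer

feat = conv1d(conv1d(x, w1), w2)             # (32, 128) feature maps
gap = feat.mean(axis=1)                      # global average pooling -> (32,)

n_classes = 3                                # e.g. novice / intermediate / expert
w_fc = rng.standard_normal((n_classes, 32)) * 0.1
logits = w_fc @ gap
pred = int(np.argmax(logits))

# CAM for the predicted class: weight each feature map by its FC weight and
# sum over channels. Peaks mark the time steps that drove the prediction,
# which is what lets the method point at influential parts of the surgery.
cam = w_fc[pred] @ feat                      # (128,): one relevance score per time step
```

Because GAP is a mean over time, the CAM averages back to the class logit (`cam.mean() == logits[pred]`), which is why the per-time-step scores are a faithful decomposition of the prediction rather than a post-hoc heuristic.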

Related Papers

Can "consciousness" be observed from large language model (LLM) internal states? Dissecting LLM representations obtained from Theory of Mind test with Integrated Information Theory and Span Representation analysis (2025-06-26)
The Most Important Features in Generalized Additive Models Might Be Groups of Features (2025-06-24)
Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images (2025-06-17)
Leveraging Predictive Equivalence in Decision Trees (2025-06-17)
Interpretable representation learning of quantum data enabled by probabilistic variational autoencoders (2025-06-13)
An Interpretable Machine Learning Approach in Predicting Inflation Using Payments System Data: A Case Study of Indonesia (2025-06-12)
An Attention-based Spatio-Temporal Neural Operator for Evolving Physics (2025-06-12)
midr: Learning from Black-Box Models by Maximum Interpretation Decomposition (2025-06-10)