Fully automated workflow for the design of patient-specific orthopaedic implants: application to total knee arthroplasty
- URL: http://arxiv.org/abs/2403.15353v2
- Date: Mon, 25 Mar 2024 09:36:42 GMT
- Title: Fully automated workflow for the design of patient-specific orthopaedic implants: application to total knee arthroplasty
- Authors: Aziliz Guezou-Philippe, Arnaud Clavé, Ehouarn Maguet, Ludivine Maintier, Charles Garraud, Jean-Rassaire Fouefack, Valérie Burdin, Eric Stindel, Guillaume Dardenne
- Abstract summary: The proposed workflow allows for fast and reliable personalisation of knee implants, directly from the patient's CT image.
It establishes patient-specific pre-operative planning for TKA in a very short time, making it easily available to all patients.
This solution could help meet the growing demand for arthroplasties while reducing complications and improving patient satisfaction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Arthroplasty is commonly performed to treat joint osteoarthritis, reducing pain and improving mobility. While arthroplasty has seen several technical improvements, a significant share of patients remain unsatisfied with their surgery. Personalised arthroplasty improves surgical outcomes; however, current solutions involve delays that make them difficult to integrate into clinical routine. We propose a fully automated workflow to design patient-specific implants, presented for total knee arthroplasty (TKA), currently the most widely performed arthroplasty in the world. The proposed pipeline first uses artificial neural networks to segment the proximal and distal extremities of the femur and tibia. The full bones are then reconstructed using augmented statistical shape models, combining shape and landmark information. Finally, 77 morphological parameters are computed to design patient-specific implants. The developed workflow was trained on 91 lower-limb CT scans and evaluated on 41 manually segmented CT scans, in terms of accuracy and execution time. The workflow accuracy was $0.4\pm0.2mm$ for the segmentation, $1.2\pm0.4mm$ for the full bone reconstruction, and $2.8\pm2.2mm$ for the anatomical landmark determination. The custom implants fitted the patients' anatomy with $0.6\pm0.2mm$ accuracy. The whole process, from segmentation to implant design, took about 5 minutes. The proposed workflow allows for fast and reliable personalisation of knee implants directly from the patient's CT image, without requiring any manual intervention. It establishes patient-specific pre-operative planning for TKA in a very short time, making it easily available to all patients. Combined with efficient implant manufacturing techniques, this solution could help meet the growing demand for arthroplasties while reducing complications and improving patient satisfaction.
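The reconstruction step described in the abstract (fitting an augmented statistical shape model to the segmented bone extremities) is, at its core, a regularised least-squares fit of shape-model coefficients to partial observations. Below is a minimal, illustrative sketch of that idea with synthetic data; it is not the authors' implementation, and the real workflow additionally exploits landmark information and neural-network segmentation.

```python
# Minimal sketch (not the authors' code): fitting a statistical shape model (SSM)
# to a partially observed bone surface, as in the reconstruction step above.
# All shapes and modes here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_points, n_modes = 500, 10                       # SSM resolution and number of modes
mean_shape = rng.normal(size=(n_points, 3))       # mean bone shape (synthetic)
modes = rng.normal(size=(n_modes, n_points, 3))   # principal modes of variation (synthetic)

def reconstruct(obs_idx, obs_points, lam=1e-2):
    """Estimate SSM coefficients from the observed (segmented) subset of points,
    then return the full reconstructed shape. `lam` is a Tikhonov regulariser."""
    A = modes[:, obs_idx, :].reshape(n_modes, -1).T           # (3*n_obs, n_modes)
    r = (obs_points - mean_shape[obs_idx]).reshape(-1)        # residual to explain
    b = np.linalg.solve(A.T @ A + lam * np.eye(n_modes), A.T @ r)
    return mean_shape + np.tensordot(b, modes, axes=1)        # full bone estimate

# Simulate a "ground-truth" bone and observe only one extremity (first 150 points).
b_true = rng.normal(scale=0.5, size=n_modes)
full_bone = mean_shape + np.tensordot(b_true, modes, axes=1)
obs_idx = np.arange(150)
estimate = reconstruct(obs_idx, full_bone[obs_idx])

print("mean reconstruction error:",
      np.linalg.norm(estimate - full_bone, axis=1).mean())
```

In an actual pipeline of this kind, the modes would come from a PCA of registered training bones and the observed points from the CT segmentation of the proximal or distal extremity; the "augmented" model in the paper also encodes anatomical landmarks alongside the surface points.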
Related papers
- Enhanced Knee Kinematics: Leveraging Deep Learning and Morphing Algorithms for 3D Implant Modeling [2.752817022620644]
This study proposes a novel approach using machine learning algorithms and morphing techniques for precise 3D reconstruction of implanted knee models.
A convolutional neural network is trained to automatically segment the femur contour of the implanted components.
A morphing algorithm generates a personalized 3D model of the implanted knee joint.
arXiv Detail & Related papers (2024-08-02T20:11:04Z)
- A Semi-automatic Cranial Implant Design Tool Based on Rigid ICP Template Alignment and Voxel Space Reconstruction [2.0793077626669327]
Cranioplasty is the craft of neurocranial repair using cranial implants.
Despite the improvements made in recent years, the design of a patient-specific implant (PSI) is among the most complex, expensive, and least automated tasks in cranioplasty.
We create a prototype application with a graphical user interface (UI) specifically tailored for semi-automatic implant generation.
A general outline of the proposed implant generation process involves setting an area of interest, aligning the templates, and then creating the implant in voxel space.
arXiv Detail & Related papers (2024-03-19T08:24:05Z)
- An Automated Real-Time Approach for Image Processing and Segmentation of Fluoroscopic Images and Videos Using a Single Deep Learning Network [2.752817022620644]
The potential of using machine learning for image segmentation in total knee arthroplasty lies in its ability to improve segmentation accuracy, automate the process, and provide real-time assistance to surgeons.
This paper proposes a methodology to use deep learning for robust real-time total knee image segmentation.
The deep learning model, trained on a large dataset, demonstrates outstanding performance in accurately segmenting both the implanted femur and tibia.
arXiv Detail & Related papers (2024-01-23T05:00:02Z)
- Visual-Kinematics Graph Learning for Procedure-agnostic Instrument Tip Segmentation in Robotic Surgeries [29.201385352740555]
We propose a novel visual-kinematics graph learning framework to accurately segment the instrument tip given various surgical procedures.
Specifically, a graph learning framework is proposed to encode relational features of instrument parts from both image and kinematics.
A cross-modal contrastive loss is designed to incorporate robust geometric prior from kinematics to image for tip segmentation.
arXiv Detail & Related papers (2023-09-02T14:52:58Z)
- Automatic registration with continuous pose updates for marker-less surgical navigation in spine surgery [52.63271687382495]
We present an approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner.
A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models.
An intuitive surgical guidance is provided thanks to the integration into an augmented reality based navigation system.
arXiv Detail & Related papers (2023-08-05T16:26:41Z)
- GLSFormer: Gated - Long, Short Sequence Transformer for Step Recognition in Surgical Videos [57.93194315839009]
We propose a vision transformer-based approach to learn temporal features directly from sequence-level patches.
We extensively evaluate our approach on two cataract surgery video datasets, Cataract-101 and D99, and demonstrate superior performance compared to various state-of-the-art methods.
arXiv Detail & Related papers (2023-07-20T17:57:04Z)
- Neural LerPlane Representations for Fast 4D Reconstruction of Deformable Tissues [52.886545681833596]
LerPlane is a novel method for fast and accurate reconstruction of surgical scenes under a single-viewpoint setting.
LerPlane treats surgical procedures as 4D volumes and factorizes them into explicit 2D planes of static and dynamic fields.
LerPlane shares static fields, significantly reducing the workload of dynamic tissue modeling.
arXiv Detail & Related papers (2023-05-31T14:38:35Z)
- Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement [61.28459114068828]
We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach was capable of achieving 90% bone penetration with respect to the gold standard (GS) drill planning.
arXiv Detail & Related papers (2023-05-09T11:42:53Z)
- iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry [58.943644554192936]
This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or digital-twins.
The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients.
iPhantom precisely predicted all organ locations, with Dice Similarity Coefficients (DSC) >0.6 for anchor organs and 0.3-0.9 for all other organs (a minimal DSC computation is sketched after this list).
arXiv Detail & Related papers (2020-08-20T01:50:49Z)
- Deep Negative Volume Segmentation [60.44793799306154]
We propose a new angle to the 3D segmentation task: segment empty spaces between all the tissues surrounding the object.
Our approach is an end-to-end pipeline that comprises a V-Net for bone segmentation.
We validate the idea on the CT scans in a 50-patient dataset, annotated by experts in maxillofacial medicine.
arXiv Detail & Related papers (2020-06-22T16:55:23Z)
- Spatiotemporal-Aware Augmented Reality: Redefining HCI in Image-Guided Therapy [39.370739217840594]
Augmented reality (AR) has been introduced into operating rooms over the last decade.
This paper shows how exemplary visualizations are redefined by taking full advantage of head-mounted displays.
The awareness of the system from the geometric and physical characteristics of X-ray imaging allows the redefinition of different human-machine interfaces.
arXiv Detail & Related papers (2020-03-04T18:59:55Z)
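For reference, the Dice Similarity Coefficient (DSC) quoted in the iPhantom entry above is the standard overlap score between a predicted and a reference segmentation mask, DSC = 2|A ∩ B| / (|A| + |B|). The following is a minimal sketch of how it is typically computed on voxel masks, using synthetic data; it is not the iPhantom implementation.

```python
# Minimal DSC sketch on synthetic 3D masks (not the iPhantom code).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1]; 1 means perfect overlap."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# Toy masks: a 16x16x16 cube and a copy shifted by 3 voxels along one axis.
ref = np.zeros((32, 32, 32), dtype=bool)
ref[8:24, 8:24, 8:24] = True
pred = np.roll(ref, shift=3, axis=0)

print(f"DSC = {dice(pred, ref):.3f}")   # about 0.81 for this shift
```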