A searchable list of some of my publications is below. You can also access my publications from the following sites.
My ORCID is
Publications:
Aneeq Zia, Liheng Guo, Linlin Zhou, Irfan Essa, Anthony Jarc
Novel evaluation of surgical activity recognition models using task-based efficiency metrics Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, 2019.
@article{2019-Zia-NESARMUTEM,
title = {Novel evaluation of surgical activity recognition models using task-based efficiency metrics},
author = {Aneeq Zia and Liheng Guo and Linlin Zhou and Irfan Essa and Anthony Jarc},
url = {https://www.ncbi.nlm.nih.gov/pubmed/31267333},
doi = {10.1007/s11548-019-02025-w},
year = {2019},
date = {2019-07-01},
urldate = {2019-07-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
abstract = {PURPOSE: Surgical task-based metrics (rather than entire
procedure metrics) can be used to improve surgeon training and,
ultimately, patient care through focused training interventions.
Machine learning models to automatically recognize individual
tasks or activities are needed to overcome the otherwise manual
effort of video review. Traditionally, these models have been
evaluated using frame-level accuracy. Here, we propose evaluating
surgical activity recognition models by their effect on
task-based efficiency metrics. In this way, we can determine when
models have achieved adequate performance for providing surgeon
feedback via metrics from individual tasks. METHODS: We propose a
new CNN-LSTM model, RP-Net-V2, to recognize the 12 steps of
robotic-assisted radical prostatectomies (RARP). We evaluated our
model both in terms of conventional methods (e.g., Jaccard Index,
task boundary accuracy) as well as novel ways, such as the
accuracy of efficiency metrics computed from instrument movements
and system events. RESULTS: Our proposed model achieves a Jaccard
Index of 0.85 thereby outperforming previous models on RARP.
Additionally, we show that metrics computed from tasks
automatically identified using RP-Net-V2 correlate well with
metrics from tasks labeled by clinical experts. CONCLUSION: We
demonstrate that metrics-based evaluation of surgical activity
recognition models is a viable approach to determine when models
can be used to quantify surgical efficiencies. We believe this
approach and our results illustrate the potential for fully
automated, postoperative efficiency reports.},
keywords = {activity assessment, activity recognition, surgical training},
pubstate = {published},
tppubtype = {article}
}
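The Jaccard index used above to evaluate RP-Net-V2 can be illustrated with a short, self-contained sketch: per task label, it is the intersection-over-union of the frames assigned that label by the model and by the ground truth. The frame-level label sequences below are hypothetical examples, not data from the paper.

```python
def jaccard_index(pred, truth, label):
    """Intersection-over-union of the frames assigned to one task label."""
    pred_frames = {i for i, p in enumerate(pred) if p == label}
    true_frames = {i for i, t in enumerate(truth) if t == label}
    union = pred_frames | true_frames
    return len(pred_frames & true_frames) / len(union) if union else 1.0

def mean_jaccard(pred, truth):
    """Average the per-label Jaccard index over all task labels present."""
    labels = set(pred) | set(truth)
    return sum(jaccard_index(pred, truth, l) for l in labels) / len(labels)

# Hypothetical frame-level task labels for a short clip (3 tasks, 10 frames)
truth = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
pred  = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
print(round(mean_jaccard(pred, truth), 3))  # → 0.672
```

A prediction that shifts task boundaries by a frame or two still scores high, which is why the paper pairs this with boundary accuracy and downstream efficiency metrics.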
Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric L Sarin, Irfan Essa
Video and accelerometer-based motion analysis for automated surgical skills assessment Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, vol. 13, no. 3, pp. 443–455, 2018.
@article{2018-Zia-VAMAASSA,
title = {Video and accelerometer-based motion analysis for automated surgical skills assessment},
author = {Aneeq Zia and Yachna Sharma and Vinay Bettadapura and Eric L Sarin and Irfan Essa},
url = {https://link.springer.com/article/10.1007/s11548-018-1704-z},
doi = {10.1007/s11548-018-1704-z},
year = {2018},
date = {2018-01-01},
urldate = {2018-01-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
volume = {13},
number = {3},
pages = {443--455},
publisher = {Springer},
keywords = {activity assessment, activity recognition, IJCARS, surgical training},
pubstate = {published},
tppubtype = {article}
}
Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric Sarin, Irfan Essa
Video and Accelerometer-Based Motion Analysis for Automated Surgical Skills Assessment Proceedings Article
In: Information Processing in Computer-Assisted Interventions (IPCAI), 2017.
@inproceedings{2017-Zia-VAMAASSA,
title = {Video and Accelerometer-Based Motion Analysis for Automated Surgical Skills Assessment},
author = {Aneeq Zia and Yachna Sharma and Vinay Bettadapura and Eric Sarin and Irfan Essa},
year = {2017},
date = {2017-06-01},
urldate = {2017-06-01},
booktitle = {Information Processing in Computer-Assisted Interventions (IPCAI)},
keywords = {activity assessment, activity recognition, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
Aneeq Zia, Daniel Castro, Irfan Essa
Fine-tuning Deep Architectures for Surgical Tool Detection Proceedings Article
In: Workshop and Challenges on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), Held in Conjunction with International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Athens, Greece, 2016.
@inproceedings{2016-Zia-FDASTD,
title = {Fine-tuning Deep Architectures for Surgical Tool Detection},
author = {Aneeq Zia and Daniel Castro and Irfan Essa},
url = {http://www.cc.gatech.edu/cpl/projects/deepm2cai/
https://www.cc.gatech.edu/cpl/projects/deepm2cai/paper.pdf},
year = {2016},
date = {2016-10-01},
urldate = {2016-10-01},
booktitle = {Workshop and Challenges on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), Held in Conjunction with International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)},
address = {Athens, Greece},
abstract = {Understanding surgical workflow has been a key concern of the medical research community. One of the main advantages of surgical workflow detection is real-time operating room (OR) scheduling. For hospitals, each minute of OR time matters for reducing cost and increasing patient throughput. Traditional approaches in this field generally tackle the video analysis using hand-crafted video features to facilitate tool detection. Recently, Twinanda et al. presented a CNN architecture, 'EndoNet', which outperformed previous methods for both surgical tool detection and surgical phase detection. Given the recent success of these networks, we present a study of various architectures coupled with a submission to the M2CAI Surgical Tool Detection challenge. We achieved a top-3 result in the M2CAI competition with a mAP of 37.6.
},
keywords = {activity assessment, computer vision, MICCAI, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
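The challenge result above is reported as mean average precision (mAP), the mean over tool classes of per-class average precision. As a hedged illustration, average precision for a single class can be computed from ranked detection scores as below; the scores and ground-truth labels are invented for the example, not taken from the challenge data.

```python
def average_precision(scores, labels):
    """Average precision for one class: mean of the precision values at
    each rank where a true positive is retrieved (ranked by score)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(1, sum(labels))

# Hypothetical per-frame confidence scores for one tool class
scores = [0.9, 0.8, 0.4, 0.7, 0.1]
labels = [1, 0, 1, 1, 0]  # ground-truth presence of the tool
print(round(average_precision(scores, labels), 3))  # → 0.806
```

Averaging this quantity over all tool classes gives the mAP figure used to rank challenge submissions.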
Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric Sarin, Thomas Ploetz, Mark Clements, Irfan Essa
Automated video-based assessment of surgical skills for training and evaluation in medical schools Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, vol. 11, no. 9, pp. 1623–1636, 2016.
@article{2016-Zia-AVASSTEMS,
title = {Automated video-based assessment of surgical skills for training and evaluation in medical schools},
author = {Aneeq Zia and Yachna Sharma and Vinay Bettadapura and Eric Sarin and Thomas Ploetz and Mark Clements and Irfan Essa},
url = {http://link.springer.com/article/10.1007/s11548-016-1468-2
https://pubmed.ncbi.nlm.nih.gov/27567917/},
doi = {10.1007/s11548-016-1468-2},
year = {2016},
date = {2016-09-01},
urldate = {2016-09-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
volume = {11},
number = {9},
pages = {1623--1636},
publisher = {Springer Berlin Heidelberg},
abstract = {Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, remain time-consuming and subject to human bias. In this paper, we present an automated system for surgical skills assessment that analyzes video data of surgical activities.
},
keywords = {activity assessment, computational health, IJCARS, surgical training},
pubstate = {published},
tppubtype = {article}
}
Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric Sarin, Mark Clements, Irfan Essa
Automated Assessment of Surgical Skills Using Frequency Analysis Proceedings Article
In: International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI), 2015.
@inproceedings{2015-Zia-AASSUFA,
title = {Automated Assessment of Surgical Skills Using Frequency Analysis},
author = {Aneeq Zia and Yachna Sharma and Vinay Bettadapura and Eric Sarin and Mark Clements and Irfan Essa},
url = {https://link.springer.com/chapter/10.1007/978-3-319-24553-9_53
https://rdcu.be/c7CEF},
doi = {10.1007/978-3-319-24553-9_53},
year = {2015},
date = {2015-10-01},
urldate = {2015-10-01},
booktitle = {International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI)},
abstract = {We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. Video analysis techniques for extracting motion quality via frequency coefficients are introduced. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
},
keywords = {activity assessment, computational health, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
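As a rough sketch of the frequency-domain features this abstract describes, a 1-D motion trajectory (e.g., a tool-tip coordinate over time) can be summarized by its leading DFT magnitudes and DCT-II coefficients; smooth motions concentrate energy in the low-frequency terms. The signal, coefficient count, and function names here are illustrative assumptions, not the paper's implementation.

```python
import cmath
import math

def dft_magnitudes(signal, n_coeffs=8):
    """Leading DFT magnitudes of a 1-D trajectory (naive O(N^2) DFT)."""
    N = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * n / N)
                    for n, x in enumerate(signal)))
            for k in range(min(n_coeffs, N))]

def dct2_coeffs(signal, n_coeffs=8):
    """Leading DCT-II coefficients of the same trajectory."""
    N = len(signal)
    return [2 * sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                    for n, x in enumerate(signal))
            for k in range(min(n_coeffs, N))]

# Hypothetical tool-tip x-coordinate: a smooth, low-frequency motion
t = [i / 64 for i in range(64)]
smooth = [math.sin(2 * math.pi * 2 * ti) for ti in t]
features = dft_magnitudes(smooth) + dct2_coeffs(smooth)
print(len(features))  # → 16
```

Concatenating such coefficient vectors across trajectories yields a fixed-length feature vector that a standard classifier can map to skill levels, which is the general shape of the approach the abstract outlines.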
Yachna Sharma, Vinay Bettadapura, Thomas Ploetz, Nils Hammerla, Sebastian Mellor, Roisin McNaney, Patrick Olivier, Sandeep Deshmukh, Andrew Mccaskie, Irfan Essa
Video Based Assessment of OSATS Using Sequential Motion Textures (Best Paper) Proceedings Article
In: Proceedings of Workshop on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), 2014.
@inproceedings{2014-Sharma-VBAOUSMT,
title = {Video Based Assessment of OSATS Using Sequential Motion Textures},
author = {Yachna Sharma and Vinay Bettadapura and Thomas Ploetz and Nils Hammerla and Sebastian Mellor and Roisin McNaney and Patrick Olivier and Sandeep Deshmukh and Andrew Mccaskie and Irfan Essa},
url = {https://smartech.gatech.edu/bitstream/handle/1853/53651/2014-Sharma-VBAOUSMT.pdf
https://www.semanticscholar.org/paper/Video-Based-Assessment-of-OSATS-Using-Sequential-Sharma-Bettadapura/1dde770faa24d4e04306ca6fb85e76dc78876c49},
year = {2014},
date = {2014-09-01},
urldate = {2014-09-01},
booktitle = {Proceedings of Workshop on Modeling and Monitoring of Computer Assisted Interventions (M2CAI)},
abstract = {A fully automated framework for video-based surgical skill assessment is presented that incorporates the sequential and qualitative aspects of surgical motion in a data-driven manner. The Objective Structured Assessment of Technical Skills (OSATS) assessment is replicated, which provides both an overall and a detailed evaluation of the basic suturing skills required of surgeons. Video analysis techniques are introduced that incorporate sequential motion aspects into motion textures. Significant performance improvement over standard bag-of-words and motion-analysis approaches is demonstrated. The framework is evaluated in a case study involving medical students with varying levels of expertise performing basic surgical tasks in a surgical training lab setting.
},
keywords = {activity assessment, awards, best paper award, computer vision, medical imaging, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
Eric Sarin, Kihwan Kim, Irfan Essa, William Cooper
3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education Proceedings Article
In: Proceedings of Society of Thoracic Surgeons Annual Meeting, Society of Thoracic Surgeons, 2011.
@inproceedings{2011-Sarin-3VORUAMCNPESSE,
title = {3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education},
author = {Eric Sarin and Kihwan Kim and Irfan Essa and William Cooper},
year = {2011},
date = {2011-01-01},
urldate = {2011-01-01},
booktitle = {Proceedings of Society of Thoracic Surgeons Annual Meeting},
publisher = {Society of Thoracic Surgeons},
keywords = {computational health, computer vision, intelligent environments, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
Other Publication Sites
A few more sites that aggregate research publications: Academia.edu, Bibsonomy, CiteULike, Mendeley.
Copyright/About
[Please see the Copyright Statement that may apply to the content listed here.]
This list of publications is produced by using the teachPress plugin for WordPress.