A searchable list of some of my publications is below. You can also access my publications from the sites listed at the end of this page or through my ORCID profile.
Publications:
Zoher Ghogawala, Melissa Dunbar, Irfan Essa
Artificial Intelligence for the Treatment of Lumbar Spondylolisthesis Journal Article
In: Neurosurgery Clinics of North America, vol. 30, no. 3, pp. 383-389, 2019, ISSN: 1042-3680, (Lumbar Spondylolisthesis).
@article{2019-Ghogawala-AITLS,
title = {Artificial Intelligence for the Treatment of Lumbar Spondylolisthesis},
author = {Zoher Ghogawala and Melissa Dunbar and Irfan Essa},
url = {http://www.sciencedirect.com/science/article/pii/S1042368019300257
https://pubmed.ncbi.nlm.nih.gov/31078239/},
doi = {10.1016/j.nec.2019.02.012},
issn = {1042-3680},
year = {2019},
date = {2019-07-01},
urldate = {2019-07-01},
journal = {Neurosurgery Clinics of North America},
volume = {30},
number = {3},
pages = {383--389},
abstract = {Multiple registries are currently collecting patient-specific data on lumbar spondylolisthesis including outcomes data. The collection of imaging diagnostics data along with comparative outcomes data following decompression versus decompression and fusion treatments for degenerative spondylolisthesis represents an enormous opportunity for modern machine-learning analytics research.
},
note = {Lumbar Spondylolisthesis},
keywords = {AI, computational health, Predictive analytics},
pubstate = {published},
tppubtype = {article}
}
Zoher Ghogawala, Melissa Dunbar, Irfan Essa
Lumbar spondylolisthesis: modern registries and the development of artificial intelligence Journal Article
In: Journal of Neurosurgery: Spine (JNSPG 75th Anniversary Invited Review Article), vol. 30, no. 6, pp. 729-735, 2019.
@article{2019-Ghogawala-LSMRDAI,
title = {Lumbar spondylolisthesis: modern registries and the development of artificial intelligence},
author = {Zoher Ghogawala and Melissa Dunbar and Irfan Essa},
doi = {10.3171/2019.2.SPINE18751},
year = {2019},
date = {2019-06-01},
urldate = {2019-06-01},
journal = {Journal of Neurosurgery: Spine (JNSPG 75th Anniversary Invited Review Article)},
volume = {30},
number = {6},
pages = {729-735},
keywords = {AI, computational health, Predictive analytics},
pubstate = {published},
tppubtype = {article}
}
Edison Thomaz, Irfan Essa, Gregory Abowd
Challenges and Opportunities in Automated Detection of Eating Activity Proceedings Article
In: Mobile Health, pp. 151–174, Springer, 2017.
@inproceedings{2017-Thomaz-COADEA,
title = {Challenges and Opportunities in Automated Detection of Eating Activity},
author = {Edison Thomaz and Irfan Essa and Gregory Abowd},
url = {https://link.springer.com/chapter/10.1007/978-3-319-51394-2_9},
doi = {10.1007/978-3-319-51394-2_9},
year = {2017},
date = {2017-01-01},
urldate = {2017-01-01},
booktitle = {Mobile Health},
pages = {151--174},
publisher = {Springer},
abstract = {Motivated by applications in nutritional epidemiology and food journaling, computing researchers have proposed numerous techniques for automating dietary monitoring over the years. Although progress has been made, a truly practical system that can automatically recognize what people eat in real-world settings remains elusive. Eating detection is a foundational element of automated dietary monitoring (ADM) since automatically recognizing when a person is eating is required before identifying what and how much is being consumed. Additionally, eating detection can serve as the basis for new types of dietary self-monitoring practices such as semi-automated food journaling. This chapter discusses the problem of automated eating detection and presents a variety of practical techniques for detecting eating activities in real-world settings. These techniques center on three sensing modalities: first-person images taken with wearable cameras, ambient sounds, and on-body inertial sensors [34–37]. The chapter begins with an analysis of how first-person images reflecting everyday experiences can be used to identify eating moments using two approaches: human computation and convolutional neural networks. Next, we present an analysis showing how certain sounds associated with eating can be recognized and used to infer eating activities. Finally, we introduce a method for detecting eating moments with on-body inertial sensors placed on the wrist.
},
keywords = {activity recognition, computational health, ubiquitous computing},
pubstate = {published},
tppubtype = {inproceedings}
}
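The chapter's wrist-inertial approach treats eating detection as classification over fixed-length windows of motion data. As a minimal sketch of that sliding-window framing, assuming an illustrative sampling rate, window length, overlap, and feature set (none of these values come from the chapter):

import numpy as np

def window_features(acc, fs=25, win_s=6.0, overlap=0.5):
    """acc: (n_samples, 3) wrist accelerometer stream sampled at fs Hz."""
    win = int(fs * win_s)
    step = int(win * (1 - overlap))
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        seg = acc[start:start + win]
        mag = np.linalg.norm(seg, axis=1)  # overall motion magnitude
        # Per-axis mean and variance, plus magnitude statistics.
        feats.append(np.concatenate([seg.mean(0), seg.var(0),
                                     [mag.mean(), mag.var()]]))
    return np.array(feats)

# Each row is one window's feature vector, ready for a binary
# eating / non-eating classifier.
X = window_features(np.random.randn(25 * 120, 3))  # 2 minutes of synthetic data
print(X.shape)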
Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric Sarin, Thomas Ploetz, Mark Clements, Irfan Essa
Automated video-based assessment of surgical skills for training and evaluation in medical schools Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, vol. 11, no. 9, pp. 1623–1636, 2016.
@article{2016-Zia-AVASSTEMS,
title = {Automated video-based assessment of surgical skills for training and evaluation in medical schools},
author = {Aneeq Zia and Yachna Sharma and Vinay Bettadapura and Eric Sarin and Thomas Ploetz and Mark Clements and Irfan Essa},
url = {http://link.springer.com/article/10.1007/s11548-016-1468-2
https://pubmed.ncbi.nlm.nih.gov/27567917/},
doi = {10.1007/s11548-016-1468-2},
year = {2016},
date = {2016-09-01},
urldate = {2016-09-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
volume = {11},
number = {9},
pages = {1623--1636},
publisher = {Springer Berlin Heidelberg},
abstract = {Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still incredibly time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities.
},
keywords = {activity assessment, computational health, IJCARS, surgical training},
pubstate = {published},
tppubtype = {article}
}
Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric Sarin, Mark Clements, Irfan Essa
Automated Assessment of Surgical Skills Using Frequency Analysis Proceedings Article
In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.
@inproceedings{2015-Zia-AASSUFA,
title = {Automated Assessment of Surgical Skills Using Frequency Analysis},
author = {Aneeq Zia and Yachna Sharma and Vinay Bettadapura and Eric Sarin and Mark Clements and Irfan Essa},
url = {https://link.springer.com/chapter/10.1007/978-3-319-24553-9_53
https://rdcu.be/c7CEF},
doi = {10.1007/978-3-319-24553-9_53},
year = {2015},
date = {2015-10-01},
urldate = {2015-10-01},
booktitle = {International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)},
abstract = {We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. Video analysis techniques for extracting motion quality via frequency coefficients are introduced. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
},
keywords = {activity assessment, computational health, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
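The MICCAI paper above scores surgical skill from frequency-domain representations of motion. A minimal sketch of that idea, assuming a 1-D motion trace and an illustrative number of retained coefficients (the paper's actual features and parameters may differ):

import numpy as np
from scipy.fft import fft, dct

def frequency_features(signal, n_coeffs=10):
    # Keep the leading (low-frequency) DFT magnitudes and DCT-II
    # coefficients, which summarize the smoothness of the motion.
    dft_mag = np.abs(fft(signal))[:n_coeffs]
    dct_coef = dct(signal, norm='ortho')[:n_coeffs]
    return np.concatenate([dft_mag, dct_coef])

# Example: one tracked motion trajectory sampled at 50 Hz for 10 s.
t = np.linspace(0, 10, 500)
trace = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(500)
print(frequency_features(trace).shape)  # (20,)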
Edison Thomaz, Irfan Essa, Gregory Abowd
A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing Proceedings Article
In: ACM International Conference on Ubiquitous Computing (UBICOMP), 2015.
@inproceedings{2015-Thomaz-PAREMWWIS,
title = {A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing},
author = {Edison Thomaz and Irfan Essa and Gregory Abowd},
url = {https://dl.acm.org/doi/10.1145/2750858.2807545},
doi = {10.1145/2750858.2807545},
year = {2015},
date = {2015-09-01},
urldate = {2015-09-01},
booktitle = {ACM International Conference on Ubiquitous Computing (UBICOMP)},
abstract = {Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% Precision, 88.8% Recall), and 71.3% (65.2% Precision, 78.6% Recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
},
keywords = {activity recognition, computational health, machine learning, Ubicomp, ubiquitous computing},
pubstate = {published},
tppubtype = {inproceedings}
}
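The reported F-scores follow directly from the stated precision/recall pairs via F1 = 2PR / (P + R); a quick check:

def f1(p, r):
    return 2 * p * r / (p + r)

print(f1(0.667, 0.888))  # ~0.762, matching the reported 76.1% up to rounding
print(f1(0.652, 0.786))  # ~0.713, matching the reported 71.3%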
Edison Thomaz, Cheng Zhang, Irfan Essa, Gregory Abowd
Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study Proceedings Article (Best Paper Award)
In: ACM Conference on Intelligent User Interfaces (IUI), 2015.
@inproceedings{2015-Thomaz-IMEARWSFASFS,
title = {Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study},
author = {Edison Thomaz and Cheng Zhang and Irfan Essa and Gregory Abowd},
url = {https://dl.acm.org/doi/10.1145/2678025.2701405},
doi = {10.1145/2678025.2701405},
year = {2015},
date = {2015-05-01},
urldate = {2015-05-01},
booktitle = {ACM Conference on Intelligent User Interfaces (IUI)},
abstract = {Dietary self-monitoring has been shown to be an effective method for weight-loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities in contrast to systems for automated dietary assessment based on specialized sensors.},
keywords = {ACM, activity recognition, AI, awards, behavioral imaging, best paper award, computational health, IUI, machine learning},
pubstate = {published},
tppubtype = {inproceedings}
}
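The abstract above distinguishes person-dependent from person-independent evaluation. A minimal sketch of the person-independent scheme using leave-one-subject-out splits; the features, labels, and random-forest classifier here are placeholders, not the paper's pipeline:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))           # placeholder ambient-audio features
y = rng.integers(0, 2, size=200)         # 1 = meal eating, 0 = other activity
subjects = np.repeat(np.arange(20), 10)  # 20 participants, 10 windows each

# Every split holds out all windows from one participant, so the model
# is always tested on a person it never saw during training.
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())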
Jonathan Bidwell, Agata Rozga, J. Kim, H. Rao, Mark Clements, Irfan Essa, Gregory Abowd
Automated Prediction of a Child's Response to Name from Audio and Video Proceedings Article
In: Proceedings of Annual Conference of the International Society of Autism Research, IMFAR 2014.
@inproceedings{2014-Bidwell-APCRNFAV,
title = {Automated Prediction of a Child's Response to Name from Audio and Video},
author = {Jonathan Bidwell and Agata Rozga and J. Kim and H. Rao and Mark Clements and Irfan Essa and Gregory Abowd},
url = {https://imfar.confex.com/imfar/2014/webprogram/Paper16999.html
https://www.researchgate.net/publication/268143304_Automated_Prediction_of_a_Child's_Response_to_Name_from_Audio_and_Video},
year = {2014},
date = {2014-05-01},
urldate = {2014-05-01},
booktitle = {Proceedings of Annual Conference of the International Society of Autism Research},
organization = {IMFAR},
abstract = {Evidence has shown that a child’s failure to respond to name is an early warning sign for autism and is measured as a part of standard assessments, e.g., ADOS [1,2]. Objectives: Build a fully automated system for measuring a child’s response to his or her name being called, given video and recorded audio during a social interaction. Here our initial goal is to enable this measurement in a naturalistic setting, with the long-term goal of eventually obtaining finer-grained behavior measurements such as child response time latency between a name call and a response. Methods: We recorded 40 social interactions between an examiner and children (ages 15-24 months). 6 of our 40 child participants showed signs of developmental delay based on standardized parent report measures (M-CHAT, CSBS-ITC, CBCL language development survey). The child sat at a table with a toy to play with. The examiner wore a lapel microphone and called the child’s name up to 3 times while standing to the right and slightly behind the child. These interactions were recorded with two cameras that we used in conjunction with the examiner’s audio for predicting when the child responded. Name calls were measured by 1) detecting when an examiner called the child’s name and 2) evaluating whether the child turned to make eye contact with the examiner. Examiner name calls were detected using a speech detection algorithm. Meanwhile, the child’s head turns were tracked using a pair of cameras consisting of an overhead Kinect color and depth camera and a front-facing color camera. These speech and head turn measurements were used to train a binary classifier for automatically predicting if and when a child responds to his or her name being called. The result is a system for predicting the child’s response to his or her name being called from automatically recorded audio and video of the session. Results: The system was evaluated against human coding of the child’s response to name from video. If the automated prediction fell within +/- 1 second of the human-coded response then we recorded a match. Across our 40 sessions we had 56 name calls, 35 responses, and 5 children who did not respond to name. Our software correctly predicted children’s response to name with a precision of 90% and a recall of 85%.},
keywords = {autism, behavioral imaging, computational health},
pubstate = {published},
tppubtype = {inproceedings}
}
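The evaluation rule stated in the abstract above (a prediction counts if it falls within +/- 1 second of the human-coded response) can be made precise with a small tolerance-matching routine; the timestamps below are illustrative:

def match_responses(predicted, coded, tol=1.0):
    # Greedily pair each predicted response time (seconds) with an
    # unmatched human-coded time no more than tol seconds away.
    unmatched = list(coded)
    tp = 0
    for p in sorted(predicted):
        hit = next((c for c in unmatched if abs(p - c) <= tol), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(coded) if coded else 0.0
    return precision, recall

print(match_responses([12.4, 30.1, 55.0], [12.0, 31.0]))  # precision 2/3, recall 1.0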
Edison Thomaz, Aman Parnami, Jonathan Bidwell, Irfan Essa, Gregory Abowd
Technological Approaches for Addressing Privacy Concerns when Recognizing Eating Behaviors with Wearable Cameras. Proceedings Article
In: ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP), 2013.
@inproceedings{2013-Thomaz-TAAPCWREBWWC,
title = {Technological Approaches for Addressing Privacy Concerns when Recognizing Eating Behaviors with Wearable Cameras.},
author = {Edison Thomaz and Aman Parnami and Jonathan Bidwell and Irfan Essa and Gregory Abowd},
doi = {10.1145/2493432.2493509},
year = {2013},
date = {2013-09-01},
urldate = {2013-09-01},
booktitle = {ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP)},
keywords = {activity recognition, computational health, privacy, Ubicomp, ubiquitous computing},
pubstate = {published},
tppubtype = {inproceedings}
}
James Rehg, Gregory Abowd, Agata Rozga, Mario Romero, Mark Clements, Stan Sclaroff, Irfan Essa, Opal Ousley, Yin Li, Chanho Kim, Hrishikesh Rao, Jonathan Kim, Liliana Lo Presti, Jianming Zhang, Denis Lantsman, Jonathan Bidwell, Zhefan Ye
Decoding Children's Social Behavior Proceedings Article
In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society 2013, ISSN: 1063-6919.
@inproceedings{2013-Rehg-DCSB,
title = {Decoding Children's Social Behavior},
author = {James Rehg and Gregory Abowd and Agata Rozga and Mario Romero and Mark Clements and Stan Sclaroff and Irfan Essa and Opal Ousley and Yin Li and Chanho Kim and Hrishikesh Rao and Jonathan Kim and Liliana Lo Presti and Jianming Zhang and Denis Lantsman and Jonathan Bidwell and Zhefan Ye},
url = {https://ieeexplore.ieee.org/document/6619282
http://www.cbi.gatech.edu/mmdb/
},
doi = {10.1109/CVPR.2013.438},
issn = {1063-6919},
year = {2013},
date = {2013-06-01},
urldate = {2013-06-01},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
organization = {IEEE Computer Society},
abstract = {We introduce a new problem domain for activity recognition: the analysis of children's social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.
},
keywords = {autism, behavioral imaging, computational health, computer vision, CVPR},
pubstate = {published},
tppubtype = {inproceedings}
}
Edison Thomaz, Aman Parnami, Irfan Essa, Gregory Abowd
Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation Proceedings Article
In: Proceedings of ACM International SenseCam and Pervasive Imaging (SenseCam '13), 2013.
@inproceedings{2013-Thomaz-FIEMFFILHC,
title = {Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation},
author = {Edison Thomaz and Aman Parnami and Irfan Essa and Gregory Abowd},
doi = {10.1145/2526667.2526672},
year = {2013},
date = {2013-01-01},
urldate = {2013-01-01},
booktitle = {Proceedings of ACM International SenseCam and Pervasive Imaging (SenseCam '13)},
keywords = {activity recognition, behavioral imaging, computational health, ubiquitous computing, wearable computing},
pubstate = {published},
tppubtype = {inproceedings}
}
Edison Thomaz, Thomas Ploetz, Irfan Essa, Gregory Abowd
Interactive Techniques for Labeling Activities Of Daily Living to Assist Machine Learning Proceedings Article
In: Proceedings of Workshop on Interactive Systems in Healthcare, 2011.
@inproceedings{2011-Thomaz-ITLADLAML,
title = {Interactive Techniques for Labeling Activities Of Daily Living to Assist Machine Learning},
author = {Edison Thomaz and Thomas Ploetz and Irfan Essa and Gregory Abowd},
url = {https://wish2011.wordpress.com/accepted-papers/
https://users.ece.utexas.edu/~ethomaz/papers/w1.pdf},
year = {2011},
date = {2011-11-01},
urldate = {2011-11-01},
booktitle = {Proceedings of Workshop on Interactive Systems in Healthcare},
abstract = {Over the next decade, as healthcare continues its march away from the hospital and towards the home, logging and making sense of activities of daily living will play a key role in health modeling and life-long home care. Machine learning research has explored ways to automate the detection and quantification of these activities in sensor-rich environments. While we continue to make progress in developing practical and cost-effective activity sensing techniques, one large hurdle remains: obtaining labeled activity data to train activity recognition systems. In this paper, we discuss the process of gathering ground truth data with human participation for health modeling applications. In particular, we propose a criterion and design space containing five dimensions that we have identified as central to the characterization and evaluation of interactive labeling methods.
},
keywords = {activity recognition, behavioral imaging, computational health, wearable computing},
pubstate = {published},
tppubtype = {inproceedings}
}
Eric Sarin, Kihwan Kim, Irfan Essa, William Cooper
3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education Proceedings Article
In: Proceedings of Society of Thoracic Surgeons Annual Meeting, Society of Thoracic Surgeons, 2011.
@inproceedings{2011-Sarin-3VORUAMCNPESSE,
title = {3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education},
author = {Eric Sarin and Kihwan Kim and Irfan Essa and William Cooper},
year = {2011},
date = {2011-01-01},
urldate = {2011-01-01},
booktitle = {Proceedings of Society of Thoracic Surgeons Annual Meeting},
publisher = {Society of Thoracic Surgeons},
keywords = {computational health, computer vision, intelligent environments, surgical training},
pubstate = {published},
tppubtype = {inproceedings}
}
Irfan Essa, Gregory Abowd, Aaron Bobick, Elizabeth Mynatt, Wendy Rogers
Building an Aware Home: Technologies for the way we may live Proceedings Article
In: Proceedings of First International Workshop on Man-Machine Symbiosis, Kyoto, Japan, 2002.
@inproceedings{2002-Essa-BAHTL,
title = {Building an Aware Home: Technologies for the way we may live},
author = {Irfan Essa and Gregory Abowd and Aaron Bobick and Elizabeth Mynatt and Wendy Rogers},
year = {2002},
date = {2002-01-01},
urldate = {2002-01-01},
booktitle = {Proceedings of First International Workshop on Man-Machine Symbiosis},
address = {Kyoto, Japan},
keywords = {aging-in-place, computational health, human-computer interaction},
pubstate = {published},
tppubtype = {inproceedings}
}
Gregory Abowd, Chris Atkeson, Aaron Bobick, Irfan Essa, Blair MacIntyre, Elizabeth Mynatt, Thad Starner
Living laboratories: the future computing environments group at the Georgia Institute of Technology Proceedings Article
In: ACM CHI Conference on Human Factors in Computing Systems, pp. 215–216, ACM Press, New York, NY, USA, 2000.
@inproceedings{2000-Abowd-LLFCEGGIT,
title = {Living laboratories: the future computing environments group at the Georgia Institute of Technology},
author = {Gregory Abowd and Chris Atkeson and Aaron Bobick and Irfan Essa and Blair MacIntyre and Elizabeth Mynatt and Thad Starner},
doi = {10.1145/633292.633416},
year = {2000},
date = {2000-04-01},
urldate = {2000-04-01},
booktitle = {ACM CHI Conference on Human Factors in Computing Systems},
pages = {215--216},
publisher = {ACM Press},
address = {New York, NY, USA},
keywords = {aging-in-place, CHI, computational health, intelligent environments},
pubstate = {published},
tppubtype = {inproceedings}
}
Cory Kidd, Rob Orr, Gregory Abowd, Chris Atkeson, Irfan Essa, Blair MacIntyre, Elizabeth Mynatt, Thad Starner, Wendy Newstetter
The Aware Home: A Living Laboratory for Ubiquitous Computing Research. Proceedings Article
In: Proceedings of Conference on Cooperative Buildings (CoBuild) [Cooperative Buildings. Integrating Information, Organizations and Architecture], pp. 191-198, Springer Berlin / Heidelberg, 1999.
@inproceedings{1999-Kidd-AHLLUCR,
title = {The Aware Home: A Living Laboratory for Ubiquitous Computing Research.},
author = {Cory Kidd and Rob Orr and Gregory Abowd and Chris Atkeson and Irfan Essa and Blair MacIntyre and Elizabeth Mynatt and Thad Starner and Wendy Newstetter},
url = {https://link.springer.com/chapter/10.1007/10705432_17
https://www.cc.gatech.edu/fce/ahri/publications/cobuild99_final.PDF},
doi = {10.1007/10705432_17},
year = {1999},
date = {1999-01-01},
urldate = {1999-01-01},
booktitle = {Proceedings of Conference on Cooperative Buildings (CoBuild) [Cooperative Buildings. Integrating Information, Organizations and Architecture]},
pages = {191-198},
publisher = {Springer Berlin / Heidelberg},
abstract = {We are building a home, called the Aware Home, to create a living laboratory for research in ubiquitous computing for everyday activities. This paper introduces the Aware Home project and outlines some of our technology- and human-centered research objectives in creating the Aware Home.
},
keywords = {aging-in-place, computational health, intelligent environments},
pubstate = {published},
tppubtype = {inproceedings}
}
Irfan Essa, Gregory Abowd, Chris Atkeson
Computational Perception in Future Computing Environments Proceedings Article
In: Workshop on Perceptual User Interfaces (PUI), 1997.
@inproceedings{1997-Essa-CPFCE,
title = {Computational Perception in Future Computing Environments},
author = {Irfan Essa and Gregory Abowd and Chris Atkeson},
url = {https://www.cc.gatech.edu/fce/pubs/pui97-fce.html},
year = {1997},
date = {1997-01-01},
urldate = {1997-01-01},
booktitle = {Workshop on Perceptual User Interfaces (PUI)},
keywords = {aging-in-place, computational health, intelligent environments},
pubstate = {published},
tppubtype = {inproceedings}
}
Other Publication Sites
A few more sites that aggregate research publications: Academia.edu, Bibsonomy, CiteULike, Mendeley.
Copyright/About
[Please see the Copyright Statement that may apply to the content listed here.]
This list of publications is produced by using the teachPress plugin for WordPress.