A searchable list of some of my publications is below. You can also access my publications from the following sites.
My ORCID is
Publications:
Vinay Bettadapura, Caroline Pantofaru, Irfan Essa
Leveraging Contextual Cues for Generating Basketball Highlights Proceedings Article
In: ACM International Conference on Multimedia (ACM-MM), ACM 2016.
Abstract | Links | BibTeX | Tags: ACM, ACMMM, activity recognition, computational video, computer vision, sports visualization, video summarization
@inproceedings{2016-Bettadapura-LCCGBH,
title = {Leveraging Contextual Cues for Generating Basketball Highlights},
author = {Vinay Bettadapura and Caroline Pantofaru and Irfan Essa},
url = {https://dl.acm.org/doi/10.1145/2964284.2964286
http://www.vbettadapura.com/highlights/basketball/index.htm},
doi = {10.1145/2964284.2964286},
year = {2016},
date = {2016-10-01},
urldate = {2016-10-01},
booktitle = {ACM International Conference on Multimedia (ACM-MM)},
organization = {ACM},
abstract = {The massive growth of sports videos has resulted in a need for automatic generation of sports highlights that are comparable in quality to the hand-edited highlights produced by broadcasters such as ESPN. Unlike previous works that mostly use audio-visual cues derived from the video, we propose an approach that additionally leverages contextual cues derived from the environment that the game is being played in. The contextual cues provide information about the excitement levels in the game, which can be ranked and selected to automatically produce high-quality basketball highlights. We introduce a new dataset of 25 NCAA games along with their play-by-play stats and the ground-truth excitement data for each basket. We explore the informativeness of five different cues derived from the video and from the environment through user studies. Our experiments show that for our study participants, the highlights produced by our system are comparable to the ones produced by ESPN for the same games.},
keywords = {ACM, ACMMM, activity recognition, computational video, computer vision, sports visualization, video summarization},
pubstate = {published},
tppubtype = {inproceedings}
}
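The abstract above describes ranking candidate clips by excitement and selecting the top ones to build a highlight reel. As a rough, purely illustrative sketch of that ranking-and-selection step (the cue names, weights, and data structures below are assumptions, not the authors' code):

```python
# Hypothetical sketch: rank game clips by an excitement score and keep the
# top-k, then reorder them by game time to form a highlight reel.

def excitement_score(clip):
    """Combine illustrative contextual cues into one score (weights assumed)."""
    return (0.5 * clip["crowd_audio_level"]
            + 0.3 * clip["play_importance"]
            + 0.2 * clip["commentator_energy"])

def select_highlights(clips, k=3):
    """Keep the k highest-scoring clips, presented in game order."""
    ranked = sorted(clips, key=excitement_score, reverse=True)[:k]
    return sorted(ranked, key=lambda c: c["time"])

clips = [
    {"time": 12, "crowd_audio_level": 0.9, "play_importance": 0.8, "commentator_energy": 0.7},
    {"time": 40, "crowd_audio_level": 0.2, "play_importance": 0.1, "commentator_energy": 0.3},
    {"time": 55, "crowd_audio_level": 0.8, "play_importance": 0.9, "commentator_energy": 0.9},
]
highlights = select_highlights(clips, k=2)
```

The paper derives its excitement signals from the game environment (e.g., play-by-play stats) rather than fixed weights; this sketch only shows the rank-then-reorder shape of the selection step.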
Edison Thomaz, Cheng Zhang, Irfan Essa, Gregory Abowd
Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study Best Paper Proceedings Article
In: ACM Conference on Intelligent User Interfaces (IUI), 2015.
Abstract | Links | BibTeX | Tags: ACM, activity recognition, AI, awards, behavioral imaging, best paper award, computational health, IUI, machine learning
@inproceedings{2015-Thomaz-IMEARWSFASFS,
title = {Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study},
author = {Edison Thomaz and Cheng Zhang and Irfan Essa and Gregory Abowd},
url = {https://dl.acm.org/doi/10.1145/2678025.2701405},
doi = {10.1145/2678025.2701405},
year = {2015},
date = {2015-05-01},
urldate = {2015-05-01},
booktitle = {ACM Conference on Intelligent User Interfaces (IUI)},
abstract = {Dietary self-monitoring has been shown to be an effective method for weight-loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities in contrast to systems for automated dietary assessment based on specialized sensors.},
keywords = {ACM, activity recognition, AI, awards, behavioral imaging, best paper award, computational health, IUI, machine learning},
pubstate = {published},
tppubtype = {inproceedings}
}
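The abstract above describes inferring eating from ambient audio captured at the wrist. A minimal sketch of that pipeline shape, assuming a window-then-classify design: the RMS-energy feature and the threshold classifier here are illustrative stand-ins, not the paper's actual features or model.

```python
# Hypothetical sketch: slice an ambient-audio stream into windows, compute a
# simple feature per window, and label each window eating-like or not.
import math

def frame_energy(samples):
    """Root-mean-square energy of one audio window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def windows(signal, size):
    """Split a sample list into non-overlapping windows of `size`."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def detect_eating(signal, size=4, threshold=0.5):
    """Label each window 1 (eating-like) or 0 by energy thresholding (assumed)."""
    return [1 if frame_energy(w) > threshold else 0 for w in windows(signal, size)]

# A quiet window followed by a louder, "chewing-like" burst.
signal = [0.1, -0.1, 0.1, -0.1, 0.9, -0.8, 0.9, -0.9]
labels = detect_eating(signal)  # one 0/1 label per window
```

A real system would use richer spectral features and a trained classifier evaluated person-dependently and person-independently, as the study reports; the point here is only the window-feature-label structure.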
V. Kwatra, I. Essa, A. Bobick, N. Kwatra
Texture Optimization for Example-based Synthesis Journal Article
In: ACM SIGGRAPH Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques, vol. 24, no. 3, pp. 795–802, 2005.
Abstract | Links | BibTeX | Tags: ACM, computational video, computer animation, computer graphics, computer vision, SIGGRAPH
@article{2005-Kwatra-TOES,
title = {Texture Optimization for Example-based Synthesis},
author = {V. Kwatra and I. Essa and A. Bobick and N. Kwatra},
url = {https://dl.acm.org/doi/10.1145/1186822.1073263
https://www.cc.gatech.edu/gvu/perception/projects/textureoptimization/
https://youtu.be/Ys_U46-FeEM
http://www.cc.gatech.edu/gvu/perception/projects/textureoptimization/TextureOptimization_DVD.mov
http://www.cc.gatech.edu/gvu/perception/projects/textureoptimization/TO-sig05.ppt
http://www.cc.gatech.edu/gvu/perception/projects/textureoptimization/TO-final.pdf
},
doi = {10.1145/1073204.1073263},
year = {2005},
date = {2005-08-01},
urldate = {2005-08-01},
journal = {ACM SIGGRAPH Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques},
volume = {24},
number = {3},
pages = {795--802},
abstract = {We present a novel technique for texture synthesis using optimization. We define a Markov Random Field (MRF)-based similarity metric for measuring the quality of synthesized texture concerning a given input sample. This allows us to formulate the synthesis problem as the minimization of an energy function, which is optimized using an Expectation Maximization (EM)-like algorithm. In contrast to most example-based techniques that do region-growing, ours is a joint optimization approach that progressively refines the entire texture. Additionally, our approach is ideally suited to allow for the controllable synthesis of textures. Specifically, we demonstrate controllability by animating image textures using flow fields. We allow for general two-dimensional flow fields that may dynamically change over time. Applications of this technique include dynamic texturing of fluid animations and texture-based flow visualization.},
keywords = {ACM, computational video, computer animation, computer graphics, computer vision, SIGGRAPH},
pubstate = {published},
tppubtype = {article}
}
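The abstract above describes an EM-like alternation: match each output neighborhood to its nearest exemplar neighborhood, then re-estimate output pixels from the matched neighborhoods. A toy 1-D sketch of that alternation (patch size, overlap, and data are assumptions; the paper operates on 2-D image neighborhoods with an MRF-based energy):

```python
# Illustrative 1-D texture-optimization loop: an E-like step matches each
# output window to its nearest exemplar window; an M-like step re-estimates
# each output sample as the average of the values the overlapping matched
# windows vote for.

def nearest_window(patch, exemplar, size):
    """Return the exemplar window with the smallest squared distance to `patch`."""
    best, best_d = None, float("inf")
    for i in range(len(exemplar) - size + 1):
        cand = exemplar[i:i + size]
        d = sum((a - b) ** 2 for a, b in zip(patch, cand))
        if d < best_d:
            best, best_d = cand, d
    return best

def texture_optimize(output, exemplar, size=3, step=1, iters=5):
    """Alternate matching and averaging until the output stabilizes."""
    output = list(output)
    for _ in range(iters):
        votes = [[] for _ in output]                      # per-sample vote lists
        for start in range(0, len(output) - size + 1, step):
            match = nearest_window(output[start:start + size], exemplar, size)
            for k, v in enumerate(match):                 # matched window votes
                votes[start + k].append(v)
        output = [sum(v) / len(v) if v else x             # M-like step: average
                  for v, x in zip(votes, output)]
    return output

exemplar = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
result = texture_optimize([0.2, 0.9, 0.1, 0.8, 0.1], exemplar)
```

On this toy input the noisy sequence is pulled toward the exemplar's alternating pattern; the progressive whole-texture refinement (rather than region growing) is the property the paper emphasizes.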
A. Pentland, I. Essa, M. Friedmann, B. Horowitz, S. E. Sclaroff
The Thingworld Modeling System: Virtual Sculpting by Modal Forces Journal Article
In: ACM SIGGRAPH Proceedings of Symposium on Interactive 3D Graphics (I3DG), vol. 24, no. 2, pp. 143–144, 1990.
Links | BibTeX | Tags: ACM, computer animation, computer graphics, I3DG, physically-based modeling, SIGGRAPH
@article{1990-Pentland-TMSVSMF,
title = {The Thingworld Modeling System: Virtual Sculpting by Modal Forces},
author = {A. Pentland and I. Essa and M. Friedmann and B. Horowitz and S. E. Sclaroff},
doi = {10.1145/91394.91434},
year = {1990},
date = {1990-03-01},
urldate = {1990-03-01},
journal = {ACM SIGGRAPH Proceedings of Symposium on Interactive 3D Graphics (I3DG)},
volume = {24},
number = {2},
pages = {143--144},
keywords = {ACM, computer animation, computer graphics, I3DG, physically-based modeling, SIGGRAPH},
pubstate = {published},
tppubtype = {article}
}
Other Publication Sites
A few more sites that aggregate research publications: Academia.edu, Bibsonomy, CiteULike, Mendeley.
Copyright/About
[Please see the Copyright Statement that may apply to the content listed here.]
This list of publications is produced using the teachPress plugin for WordPress.