A searchable list of some of my publications is below. You can also access my publications from the following sites.
Publications:
Peggy Chi, Tao Dong, Christian Frueh, Brian Colonna, Vivek Kwatra, Irfan Essa
Synthesis-Assisted Video Prototyping From a Document Proceedings Article
In: Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pp. 1–10, 2022.
Abstract | Links | BibTeX | Tags: computational video, generative media, google, human-computer interaction, UIST, video editing
@inproceedings{2022-Chi-SVPFD,
title = {Synthesis-Assisted Video Prototyping From a Document},
author = {Peggy Chi and Tao Dong and Christian Frueh and Brian Colonna and Vivek Kwatra and Irfan Essa},
url = {https://research.google/pubs/pub51631/
https://dl.acm.org/doi/abs/10.1145/3526113.3545676},
doi = {10.1145/3526113.3545676},
year = {2022},
date = {2022-10-01},
urldate = {2022-10-01},
booktitle = {Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology},
pages = {1--10},
abstract = {Video productions commonly start with a script, especially for talking head videos that feature a speaker narrating to the camera. When the source materials come from a written document -- such as a web tutorial, it takes iterations to refine content from a text article to a spoken dialogue, while considering visual compositions in each scene. We propose Doc2Video, a video prototyping approach that converts a document to interactive scripting with a preview of synthetic talking head videos. Our pipeline decomposes a source document into a series of scenes, each automatically creating a synthesized video of a virtual instructor. Designed for a specific domain -- programming cookbooks, we apply visual elements from the source document, such as a keyword, a code snippet or a screenshot, in suitable layouts. Users edit narration sentences, break or combine sections, and modify visuals to prototype a video in our Editing UI. We evaluated our pipeline with public programming cookbooks. Feedback from professional creators shows that our method provided a reasonable starting point to engage them in interactive scripting for a narrated instructional video.},
keywords = {computational video, generative media, google, human-computer interaction, UIST, video editing},
pubstate = {published},
tppubtype = {inproceedings}
}
Peggy Chi, Nathan Frey, Katrina Panovich, Irfan Essa
Automatic Instructional Video Creation from a Markdown-Formatted Tutorial Proceedings Article
In: ACM Symposium on User Interface Software and Technology (UIST), ACM Press, 2021.
Abstract | Links | BibTeX | Tags: google, human-computer interaction, UIST, video editing
@inproceedings{2021-Chi-AIVCFMT,
title = {Automatic Instructional Video Creation from a Markdown-Formatted Tutorial},
author = {Peggy Chi and Nathan Frey and Katrina Panovich and Irfan Essa},
url = {https://doi.org/10.1145/3472749.3474778
https://research.google/pubs/pub50745/
https://youtu.be/WmrZ7PUjyuM},
doi = {10.1145/3472749.3474778},
year = {2021},
date = {2021-10-01},
urldate = {2021-10-01},
booktitle = {ACM Symposium on User Interface Software and Technology (UIST)},
publisher = {ACM Press},
abstract = {We introduce HowToCut, an automatic approach that converts a Markdown-formatted tutorial into an interactive video that presents the visual instructions with a synthesized voiceover for narration. HowToCut extracts instructional content from a multimedia document that describes a step-by-step procedure. Our method selects and converts text instructions to a voiceover. It makes automatic editing decisions to align the narration with edited visual assets, including step images, videos, and text overlays. We derive our video editing strategies from an analysis of 125 web tutorials and apply Computer Vision techniques to the assets. To enable viewers to interactively navigate the tutorial, HowToCut's conversational UI presents instructions in multiple formats upon user commands. We evaluated our automatically-generated video tutorials through user studies (N=20) and validated the video quality via an online survey (N=93). The evaluation shows that our method was able to effectively create informative and useful instructional videos from a web tutorial document for both reviewing and following.},
keywords = {google, human-computer interaction, UIST, video editing},
pubstate = {published},
tppubtype = {inproceedings}
}
Peggy Chi, Zheng Sun, Katrina Panovich, Irfan Essa
Automatic Video Creation From a Web Page Proceedings Article
In: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, pp. 279–292, ACM, 2020.
Abstract | Links | BibTeX | Tags: computational video, google, human-computer interaction, UIST, video editing
@inproceedings{2020-Chi-AVCFP,
title = {Automatic Video Creation From a Web Page},
author = {Peggy Chi and Zheng Sun and Katrina Panovich and Irfan Essa},
url = {https://dl.acm.org/doi/abs/10.1145/3379337.3415814
https://research.google/pubs/pub49618/
https://ai.googleblog.com/2020/10/experimenting-with-automatic-video.html
https://www.youtube.com/watch?v=3yFYc-Wet8k},
doi = {10.1145/3379337.3415814},
year = {2020},
date = {2020-10-01},
urldate = {2020-10-01},
booktitle = {Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology},
pages = {279--292},
organization = {ACM},
abstract = {Creating marketing videos from scratch can be challenging, especially when designing for multiple platforms with different viewing criteria. We present URL2Video, an automatic approach that converts a web page into a short video given temporal and visual constraints. URL2Video captures quality materials and design styles extracted from a web page, including fonts, colors, and layouts. Using constraint programming, URL2Video's design engine organizes the visual assets into a sequence of shots and renders to a video with user-specified aspect ratio and duration. Creators can review the video composition, modify constraints, and generate video variation through a user interface. We learned the design process from designers and compared our automatically generated results with their creation through interviews and an online survey. The evaluation shows that URL2Video effectively extracted design elements from a web page and supported designers by bootstrapping the video creation process.},
keywords = {computational video, google, human-computer interaction, UIST, video editing},
pubstate = {published},
tppubtype = {inproceedings}
}
Other Publication Sites
A few more sites that aggregate research publications: Academia.edu, Bibsonomy, CiteULike, Mendeley.
Copyright/About
[Please see the Copyright Statement that may apply to the content listed here.]
This list of publications is produced by using the teachPress plugin for WordPress.