A searchable list of some of my publications is below. You can also access my publications from the following sites.
My ORCID is https://orcid.org/0000-0002-6236-2969
Publications:
1.
Daniel Nkemelu, Harshil Shah, Irfan Essa, Michael L. Best
Tackling Hate Speech in Low-resource Languages with Context Experts Proceedings Article
In: International Conference on Information & Communication Technologies and Development (ICTD), 2022.
@inproceedings{2022-Nkemelu-THSLLWCE,
title = {Tackling Hate Speech in Low-resource Languages with Context Experts},
author = {Daniel Nkemelu and Harshil Shah and Irfan Essa and Michael L. Best},
url = {https://www.nkemelu.com/data/ictd2022_nkemelu_final.pdf},
year = {2022},
date = {2022-06-01},
urldate = {2022-06-01},
booktitle = {International Conference on Information \& Communication Technologies and Development (ICTD)},
abstract = {Given Myanmar's historical and socio-political context, hate speech spread on social media has escalated into offline unrest and violence. This paper presents findings from our remote study on the automatic detection of hate speech online in Myanmar. We argue that effectively addressing this problem will require community-based approaches that combine the knowledge of context experts with machine learning tools that can analyze the vast amount of data produced. To this end, we develop a systematic process to facilitate this collaboration covering key aspects of data collection, annotation, and model validation strategies. We highlight challenges in this area stemming from small and imbalanced datasets, the need to balance non-glamorous data work and stakeholder priorities, and closed data sharing practices. Stemming from these findings, we discuss avenues for further work in developing and deploying hate speech detection systems for low-resource languages.},
keywords = {computational journalism, ICTD, social computing},
pubstate = {published},
tppubtype = {inproceedings}
}
2.
Unaiza Ahsan, Munmun De Choudhury, Irfan Essa
Towards Using Visual Attributes to Infer Image Sentiment Of Social Events Proceedings Article
In: Proceedings of The International Joint Conference on Neural Networks, International Neural Network Society, Anchorage, Alaska, US, 2017.
@inproceedings{2017-Ahsan-TUVAIISSE,
title = {Towards Using Visual Attributes to Infer Image Sentiment Of Social Events},
author = {Unaiza Ahsan and Munmun De Choudhury and Irfan Essa},
url = {https://ieeexplore.ieee.org/abstract/document/7966013},
doi = {10.1109/IJCNN.2017.7966013},
year = {2017},
date = {2017-05-01},
urldate = {2017-05-01},
booktitle = {Proceedings of The International Joint Conference on Neural Networks},
publisher = {International Neural Network Society},
address = {Anchorage, Alaska, US},
abstract = {Widespread and pervasive adoption of smartphones has led to instant sharing of photographs that capture events ranging from mundane to life-altering happenings. We propose to capture sentiment information of such social event images leveraging their visual content. Our method extracts an intermediate visual representation of social event images based on the visual attributes that occur in the images going beyond sentiment-specific attributes. We map the top predicted attributes to sentiments and extract the dominant emotion associated with a picture of a social event. Unlike recent approaches, our method generalizes to a variety of social events and even to unseen events, which are not available at training time. We demonstrate the effectiveness of our approach on a challenging social event image dataset and our method outperforms state-of-the-art approaches for classifying complex event images into sentiments.},
keywords = {computational journalism, computer vision, IJNN, machine learning},
pubstate = {published},
tppubtype = {inproceedings}
}
3.
Unaiza Ahsan, Irfan Essa
Towards Story Visualization from Social Multimedia Proceedings Article
In: Proceedings of Symposium on Computation and Journalism, 2014.
@inproceedings{2014-Ahsan-TSVFSM,
title = {Towards Story Visualization from Social Multimedia},
author = {Unaiza Ahsan and Irfan Essa},
url = {http://compute-cuj.org/cj-2014/cj2014_session5_paper2.pdf
http://symposium2014.computation-and-journalism.com/},
year = {2014},
date = {2014-10-01},
urldate = {2014-10-01},
booktitle = {Proceedings of Symposium on Computation and Journalism},
keywords = {computational journalism, computational photography},
pubstate = {published},
tppubtype = {inproceedings}
}
4.
N. Diakopoulos, I. Essa
Modulating Video Credibility via Visualization of Quality Evaluations Proceedings Article
In: WWW Workshop on Information Credibility on the Web (WICOW), 2010.
@inproceedings{2010-Diakopoulos-MVCVQE,
title = {Modulating Video Credibility via Visualization of Quality Evaluations},
author = {N. Diakopoulos and I. Essa},
doi = {10.1145/1772938.1772953},
year = {2010},
date = {2010-04-01},
booktitle = {WWW Workshop on Information Credibility on the Web (WICOW)},
keywords = {computational journalism},
pubstate = {published},
tppubtype = {inproceedings}
}
5.
N. Diakopoulos, S. Goldenberg, I. Essa
Videolyzer: Quality Analysis of Online Informational Video for Bloggers and Journalists Proceedings Article
In: ACM CHI Conference on Human Factors in Computing Systems, pp. 799-808, 2009.
@inproceedings{2009-Diakopoulos-VQAOIVBJ,
title = {Videolyzer: Quality Analysis of Online Informational Video for Bloggers and Journalists},
author = {N. Diakopoulos and S. Goldenberg and I. Essa},
doi = {10.1145/1518701.1518824},
year = {2009},
date = {2009-04-01},
booktitle = {ACM CHI Conference on Human Factors in Computing Systems},
pages = {799-808},
keywords = {computational journalism},
pubstate = {published},
tppubtype = {inproceedings}
}
6.
N. Diakopoulos, K. Luther, Y. Medynskiy, I. Essa
The Evolution of Authorship in a Remix Society Proceedings Article
In: ACM Conference on Hypertext and Hypermedia, ACM Press, Manchester, UK, 2007.
@inproceedings{2007-Diakopoulos-EARS,
title = {The Evolution of Authorship in a Remix Society},
author = {N. Diakopoulos and K. Luther and Y. Medynskiy and I. Essa},
doi = {10.1145/1286240.1286272},
year = {2007},
date = {2007-09-01},
booktitle = {ACM Conference on Hypertext and Hypermedia},
publisher = {ACM Press},
address = {Manchester, UK},
abstract = {Authorship entails the constrained selection or generation of media and the organization and layout of that media in a larger structure. But authorship is more than just selection and organization; it is a complex construct incorporating concepts of originality, authority, intertextuality, and attribution. In this paper we explore these concepts and ask how they are changing in light of modes of collaborative authorship in remix culture. We present a qualitative case study of an online video remixing site, illustrating how the constraints of that environment are impacting authorial constructs. We discuss users' self-conceptions as authors, and how values related to authorship are reflected to users through the interface and design of the site's tools. We also present some implications for the design of online communities for collaborative media creation and remixing.},
keywords = {computational journalism},
pubstate = {published},
tppubtype = {inproceedings}
}
[Please see the Copyright Statement that may apply to the content listed here.]
This list of publications is produced using the teachPress plugin for WordPress.