ENCAPSULATION OF IMAGE METADATA FOR EASE OF RETRIEVAL AND MOBILITY
Abstract
The increasing proliferation of images, driven by the multimedia capabilities of hand-held devices, has led to a loss of source information as images move between devices. Such images are cumbersome to search once stored away from their original source because they lose their descriptive data. This work developed a model that encapsulates descriptive metadata in the Exif section of the image header for effective retrieval and mobility. The resulting metadata used for retrieval was mobile, searchable and non-obstructive.
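As a rough illustration of the approach described above, the sketch below writes a free-text description and a set of keywords directly into the Exif segment of a JPEG header, so the descriptive data travels with the file. This is a minimal sketch, not the authors' model: it assumes the third-party piexif library, and the file name, description text and keywords are hypothetical.

```python
# Minimal sketch: embedding descriptive metadata in the Exif segment of a JPEG
# so that the description travels with the file. Assumes the third-party piexif
# library; the file name and text values are hypothetical examples.
import piexif
import piexif.helper

IMAGE = "photo.jpg"  # hypothetical image file

# Load the existing Exif data (if any) from the JPEG header.
exif_dict = piexif.load(IMAGE)

# Store a human-readable description in the 0th IFD ImageDescription tag.
exif_dict["0th"][piexif.ImageIFD.ImageDescription] = (
    "Graduation ceremony, main hall".encode("utf-8")
)

# Store free-form keywords in the Exif IFD UserComment tag, which requires an
# 8-byte character-code prefix; piexif.helper handles that encoding.
exif_dict["Exif"][piexif.ExifIFD.UserComment] = piexif.helper.UserComment.dump(
    "keywords: graduation, hall, 2017", encoding="unicode"
)

# Serialise the Exif block and write it back into the image file in place.
piexif.insert(piexif.dump(exif_dict), IMAGE)

# Later, the same metadata can be read back for search and retrieval.
stored = piexif.load(IMAGE)
print(stored["0th"][piexif.ImageIFD.ImageDescription].decode("utf-8"))
print(piexif.helper.UserComment.load(stored["Exif"][piexif.ExifIFD.UserComment]))
```

Because the description and keywords live inside the image header rather than in a sidecar file or external database, they remain attached when the image is copied to another device, which is the mobility property the abstract refers to.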
License

This work is licensed under a Creative Commons Attribution 4.0 International License.