It is estimated that almost 400 natural disasters occur each year, causing hundreds of thousands of casualties and material damage valued at more than 150 billion dollars. With more than 3 billion people online, it is no surprise that images from disaster scenes end up posted on social media platforms like Facebook, often as the devastation is unfolding. Professor Mariette Awad and her research team in the Maroun Semaan Faculty of Engineering and Architecture at the American University of Beirut wondered whether these images could be useful to first responders.
At the 12th International Conference on Signal-Image Technology and Internet-Based Systems in Naples, Italy, Awad and her students Hadi S. Jomaa and Yara Rizk reported a new automated approach for humanitarian computing: mining online images and the words people attach to them to rapidly identify disasters and classify them by type. The system is still in its early stages of development, but so far it has categorized damage with more than 95% accuracy.
To create the system, the AUB researchers uploaded images gathered online and divided them into two broad categories of damage: infrastructure and nature. Combinations of color, shape, texture, and other features help the system identify an image’s category. The team then developed two “bags of words” to describe the images in each category; focusing on high-frequency words kept the range of possible language manageable. “This system can grow and learn,” Professor Awad explained. She hopes that humanitarian computing can help first responders prioritize their efforts and save lives in war zones as well as at natural disaster sites.
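The “bag of words” idea described above can be illustrated with a minimal sketch. The vocabulary, captions, and scoring rule below are invented for illustration and are not taken from the AUB team’s paper; they simply show how counting high-frequency words attached to an image could assign it to the infrastructure or nature category.

```python
from collections import Counter

# Hypothetical bags of high-frequency words, one per damage category.
# The actual vocabularies used by the researchers are not published here.
BAGS = {
    "infrastructure": {"building", "bridge", "collapsed", "road", "rubble", "wall"},
    "nature": {"flood", "river", "landslide", "trees", "wildfire", "mud"},
}

def classify_caption(caption: str) -> str:
    """Assign a caption to the category whose bag shares the most words with it."""
    words = Counter(caption.lower().split())
    scores = {
        category: sum(count for word, count in words.items() if word in bag)
        for category, bag in BAGS.items()
    }
    # Pick the category with the highest word-overlap score.
    return max(scores, key=scores.get)

print(classify_caption("collapsed building and rubble after the quake"))
# infrastructure
```

A production system would of course combine such text scores with the image features (color, shape, texture) mentioned above and use a trained classifier rather than raw overlap counts.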