Notes from the Field: Using Technology for Qualitative Monitoring and Evaluation
I attended my first Technology Salon meeting focused on using Information and Communication Technologies (ICTs) for qualitative monitoring and evaluation—this particular session was held at the Rockefeller Foundation in New York. The room was filled with approximately 20 professionals in the areas of knowledge management, program evaluation, and IT from various organizations such as Catholic Relief Services, MSF/Doctors Without Borders, the World Bank, and UNICEF.
John Hecklinger of GlobalGiving, Ian Thorpe of the United Nations Development Operations Co-ordination (UNDOC) Office, and Emily Jacoby of Digital Democracy shared their lessons learned from “qualitative monitoring and evaluation”—basically, collecting and analyzing non-numerical data. Examples of these projects ranged from gathering community feedback through digital and conversational storytelling to crowdsourcing ideas for social change. Each discussant highlighted the opportunities and challenges in organizing the unstructured data of conversations, photographs, and video to move from anecdotes to action.
Three things resonated with me during the discussion:
- Video is a powerful tool for philanthropy—seeing and hearing are more effective than a written report on its own. For donors and advisors focused on impact, the key will be distinguishing videos backed by facts and a reliable evidence base from videos that are pure advertising.
- Tip #1: Perception may trump hard data. People’s opinions and perceptions about a program’s services are sometimes more important than the hard data that is collected. In addition to tracking project deadlines and other key program indicators, it is important to ask lots of questions about why people do what they do (or don’t do what you thought they would), and how those beliefs and reasons are likely to shape the success or failure of a particular approach to a problem.
Even if an organization is meeting all of its targets, what may matter more is what people think about the organization and its work. Does the assistance they receive respond to their needs? Rather than asking “Is the school open?” or “Did you get health care?”, it may be more important to ask “How do you feel about health?”
- Tip #3: Stories and tags are not enough. “Tagging” or cataloging digital data is a crucial first step in analyzing information—but it doesn’t stop there. Defining how the data will be used, even before you collect it, is critical. This is true not just for social impact analysts like the members of our team, but for individual and institutional funders as well. Knowing why you need the data before you collect it keeps data collection from becoming merely a compliance chore or a time sink that diverts valuable resources away from impact.
The important next step is making effective use of these stories and data. Some ideas on how to better use the data include adding SMS feedback, deep dives with NGOs, and face-to-face meetings. It’s important to move from collecting stories to asking what questions should be posed, how the information can help NGOs improve their performance, how this qualitative data translates into changed practice at the local and global levels, how local organizers could use the information for community mobilization or action, and how all of this informs program design, frameworks, and indicators.
You can find a full list of 10 Tips based on last week’s discussion here. I look forward to more workshops like this, to learn how tools and methods traditionally reserved for hard data (e.g., numbers) can be applied or adapted so that both soft and hard data are used for social change. We can use technology to get smarter about solutions by including the communities we aim to help. These same communities can also learn to use technology to find their voice and their own solutions.