Drones and distrust in humanitarian aid
Drones are an increasingly common tool in humanitarian aid, but problems of public perception and trust continue to slow their global rollout during disasters. Humanitarians can build trust in the technology by adopting new systems that make it easier to tell drones apart from one another in flight; they will also benefit from more research into why people distrust drones and into how the data drones collect is used in the communities they serve.
In this post, drone technology researcher Faine Greenwood describes how the international aid community and private industry can address the long-standing problem of drone distrust with a combination of improved technology, expanded research into public opinion, and a better grasp of the risks that drone technology may present to the public.
Humanitarians are using drones more widely than ever. Their benefits are apparent: they are an inexpensive and relatively easy-to-use way to collect valuable aerial data during disasters, allowing aid workers to expand their perspective on potentially dangerous situations while reducing physical risk to themselves. While small drones are no substitute for direct human contact (and do not appear to be used that way), aid workers can use them to gather real-time data on complex situations fast – a force multiplier that allows smaller teams of aid workers to collect more decision-supporting information than was possible even in the recent past.
But as popular as drones now are in the humanitarian world, one theme has constantly complicated their wider adoption: trust. In the aid world, most practitioners operate under the not-unreasonable assumption that drones – even small consumer drones that bear no resemblance to an armed Predator UAS – are likely to frighten and intimidate the people that they are attempting to help.
This concern has driven the aid world’s cautious approach to using the technology, especially in responses to conflict. The use of humanitarian drones in conflict environments has long been something of a red line, as articulated by Daniel Gilman and Matthew Easton in a 2014 OCHA report. There are good reasons for this: it’s extremely hard for people on the ground to identify a drone or what it is doing. The data that drones collect, if it falls into the wrong hands, can be used to target and harm people – a risk that is heightened in conflict settings. Compounding the problem, humanitarian aid workers, militaries, and other armed groups often fly identical, widely available consumer drones – making it all too easy for a humanitarian drone to be confused with one flown by a non-neutral actor or organization.
In today’s world, natural disasters and political conflict are not always clearly divided. Humanitarian drones have become increasingly common in complex environments, such as refugee camps close to border regions – like the Cox’s Bazar camps in Bangladesh, near the border with Myanmar. As drones become an ever more regular feature of humanitarian aid efforts, we will need to work harder than ever – and work smarter – to build and maintain public trust in the technology.
Humanitarians, perhaps better than anyone else, recognize that public perception of what we do and why we do it matters just as much as (if not more than) our actions themselves. The public’s perception of our neutrality could be grievously damaged if a drone that looks a lot like ours is used to drop a bomb, or collects data that is then used to target vulnerable people.
The humanitarian world has worked to balance the value of drone technology against the equally important need to protect the privacy and safety of people affected by disaster. In 2014, the UAViators Code of Conduct, drafted by a group of humanitarian practitioners, introduced the first set of best practices for drone use geared specifically towards aid; the document is being revised as of 2021, with a strong focus on community engagement. The ICRC’s Handbook on Data Protection in Humanitarian Action offers valuable guidance on how drone data can be collected safely and ethically. These efforts are commendable and should continue. But there is always more we can do to build public trust in the technology that we use. Here are some ideas.
How to build trust in drone technology
First, new technical developments can help us build trust in drone technology. Around the world, governments are starting to roll out remote identification systems for drones, as part of a larger global push towards UTM, or unmanned traffic management. These systems make small drones identifiable in the airspace (by digital or analog means), much as manned aircraft identify themselves. In the near future, they could, at least in theory, give all actors in a given disaster area a more sophisticated means of telling drones apart from each other. Some versions could even let anyone on the ground pull up a smartphone application that identifies a drone spotted overhead and provides basic information about who is flying it and for what purpose. This could reduce uncertainty during tense or dangerous situations, and make it harder for drones to be mixed up with one another.
However, these systems are inherently limited: they will likely require an operational UTM system (not a given during a disaster) and mobile phone access. Setting up temporary remote ID systems is possible, but remains largely untested.
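To make this concrete, here is a minimal Python sketch of what decoding one of these broadcast identity messages could look like, assuming the 25-byte Basic ID message layout from the ASTM F3411 / Open Drone ID broadcast standard; the field offsets and the example frame are illustrative, not a reference implementation.

```python
# Decoding a remote ID "Basic ID" broadcast message (assumed F3411 layout):
# byte 0 carries the message type, byte 1 the identifier and aircraft
# categories, and bytes 2-21 the drone's identifier itself.

ID_TYPES = {0: "None", 1: "Serial number (CTA-2063-A)",
            2: "CAA registration ID", 3: "UTM-assigned UUID"}
UA_TYPES = {0: "None/undeclared", 1: "Aeroplane", 2: "Helicopter/multirotor",
            6: "Glider", 10: "Airship"}  # abbreviated for illustration

def parse_basic_id(msg: bytes) -> dict:
    """Parse a Basic ID message: who the drone is, and what kind it is."""
    if len(msg) != 25:
        raise ValueError("Basic ID messages are 25 bytes")
    if msg[0] >> 4 != 0:                  # upper nibble of byte 0: message type
        raise ValueError("not a Basic ID message (type 0)")
    id_type = msg[1] >> 4                 # upper nibble: kind of identifier
    ua_type = msg[1] & 0x0F               # lower nibble: aircraft category
    uas_id = msg[2:22].rstrip(b"\x00").decode("ascii", errors="replace")
    return {"id_type": ID_TYPES.get(id_type, f"unknown ({id_type})"),
            "ua_type": UA_TYPES.get(ua_type, f"other ({ua_type})"),
            "uas_id": uas_id}

# Illustrative frame: Basic ID message, serial-number ID, multirotor airframe.
frame = bytes([0x00, 0x12]) + b"1596F123456789ABCDEF" + b"\x00" * 3
print(parse_basic_id(frame))
```

A “who is flying overhead” smartphone app would layer a registry lookup on top of this kind of decoding – mapping the broadcast identifier to a registered operator and mission – and that lookup step is precisely what depends on a functioning UTM backend and mobile connectivity.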
Second, drone manufacturers have a major part to play when it comes to humanitarian drones and trust. We as aid workers must be able to trust that the consumer drones we buy are well secured and cannot easily be hacked or compromised; the responsibility for this lies largely (if not entirely) with the manufacturer. Drone manufacturers also need to help humanitarians protect their neutrality.
Consumer drone companies market their products to a very broad set of actors: police, militaries (as a supplement to purpose-built military drones), the general public, and humanitarian aid workers. It benefits these companies to be able to point out that humanitarian aid workers use their products; at the same time, they often highlight their police and military customers in their advertising materials. In the absence of any clear mechanism for distinguishing humanitarian drones from those flown by other actors, like police and militaries, humanitarians may have to stop using consumer drone products simply to protect their neutrality. If consumer drone companies want to continue enjoying their association with humanitarian organizations, they will need to work closely with aid workers to develop better techniques, tools, and technologies for ensuring that humanitarian drones can be differentiated from the drones flown by everybody else.
Knowing why people distrust drones is key
Third, while we need better technical methods for building trust in drone technology, we also need more research and more information on why people distrust drones, and on how attitudes toward the technology vary across the world. Instead of assuming how people might feel about drones, we need to work harder to ask them ourselves. Only limited research on civilian drones and public perception exists, and most of it comes from Western countries.
What are the risks of drone data?
Fourth and finally, we need a better understanding of what the risks connected to drone data actually are. Although most drone users have a general idea of which drone operations are riskier than others, there is little concrete evidence or research to validate these perceptions. In the drone world, we assume that lower-altitude drone flights are more likely to capture detailed imagery that can be used to identify a person – but do we know how to quantify the risk to that person? Are there techniques or best practices we could be using to redact or modify drone data to ensure that it contains no personally identifiable information (PII) – and if we use these techniques, where do they fit into the legal frameworks we operate under? Do we have concrete case studies or examples of incidents where drone data was used to harm people? The better we understand the risks that drone data presents both to people affected by disaster and to aid workers, the better we can address those risks – and earn the trust of the people we work with.
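As one example of the kind of redaction technique worth evaluating, the sketch below uses the Pillow imaging library (file names are illustrative) to re-encode a drone photo without its embedded metadata – including GPS coordinates and camera serial numbers – and optionally downsample it so fine detail such as faces is harder to recover. Whether steps like this are sufficient, and how they fit into the legal frameworks we operate under, are exactly the open questions raised above.

```python
from PIL import Image

def redact_drone_photo(src_path, dst_path, max_width=None):
    """Re-encode a drone image without its metadata (EXIF, including GPS).

    Optionally downsample so fine detail (faces, license plates) is harder
    to recover. This reduces, but does not eliminate, re-identification risk.
    """
    img = Image.open(src_path)
    if max_width and img.width > max_width:
        # Cap resolution while preserving the aspect ratio.
        img = img.resize((max_width, round(img.height * max_width / img.width)))
    # Saving a JPEG without an explicit exif= argument writes pixel data only,
    # dropping the original EXIF block (GPS position, timestamps, serials).
    img.save(dst_path, quality=90)

redact_drone_photo("survey_raw.jpg", "survey_shareable.jpg", max_width=1600)
```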
If the history of technology is any guide, drones are in a transition phase, shifting from the new to the mundane. We can’t brute-force our way into convincing people to accept new technology during that transition phase. Now, we have an opportunity to demonstrate to the public that they can trust humanitarians to use drones responsibly, in ways that take cultural and contextual differences into account. What happens next depends on us.