The US Department of Defense (DoD) is turning to artificial intelligence (AI), developing military drones that use facial recognition technology to find their targets. However, experts have raised concerns about this approach.

AI Drones in Development

The US DoD has signed an $800,000 contract with Seattle-based company RealNetworks to develop autonomous drones that use machine learning to identify faces without human intervention. According to the contract, the technology will be used in special operations abroad for identification and intelligence gathering.
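
Neither the contract nor public reporting details how RealNetworks' system works, but face identification systems of this kind typically convert a detected face into a numeric embedding and compare it against embeddings of known targets. The sketch below is a minimal, hypothetical illustration of that matching step in Python; the embedding size, the watchlist, and the similarity threshold are all assumptions for illustration, not details from the DoD contract.

```python
import numpy as np

# Hypothetical watchlist mapping a name to a precomputed face embedding.
# In a deployed system these vectors would come from a trained
# face-embedding model run on enrollment photos; here they are random
# placeholders for illustration only.
rng = np.random.default_rng(seed=0)
WATCHLIST = {
    "person_a": rng.standard_normal(128),
    "person_b": rng.standard_normal(128),
}

MATCH_THRESHOLD = 0.8  # assumed cutoff; real systems tune this carefully


def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(face_embedding):
    """Return the best watchlist match above the threshold, or None."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, reference in WATCHLIST.items():
        score = cosine_similarity(face_embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


# Example: an embedding the (hypothetical) model might produce from a
# frame of drone video. Random noise here, so it will rarely match.
candidate = rng.standard_normal(128)
print(identify(candidate))  # a watchlist name above the threshold, else None
```

Notably, the questions experts raise later in this article, such as who goes on the watchlist and how the matching threshold is set, sit outside the code entirely.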

Prior to this, several companies had already incorporated artificial intelligence into drones, though for different purposes, according to BuiltIn.

For instance, Brinc's LEMUR S is an AI drone with a quadcopter design and vision capabilities. It has a 31-minute flight time and is well suited for first responders and search-and-rescue teams in high-risk situations.

Skydio's self-flying drone offers several different styles of video capture and is used in various fields, including sports.

Shield AI's "Hivemind Nova" drone is programmed to assist law enforcement and military personnel in reconnaissance missions. It can access GPS-denied areas to gather ground-level intelligence.

Facial recognition has already been used in several countries, including China and the United Arab Emirates. The United States is also not the only country developing AI drones with facial recognition technology; Libya is doing the same, according to a UN report. However, there are serious concerns about the approach.

AI Drones With Facial Recognition Raise Ethical Concerns

While the DoD has its reasons for pursuing AI drones with facial recognition technology, some experts are concerned about the threats the technology could pose to the humans it targets.

Nicholas Davis, an industry professor of emerging technology at the University of Technology Sydney, told Newsweek that there are numerous ethical implications. For instance, such devices might redistribute power and threaten groups within a society.

He added that tracking people of interest is not new; AI-driven facial recognition simply adds an extra layer of technology. The major concern is that facial recognition and AI could be used to target individuals before they have committed any crime.

Mike Ryder, a marketing lecturer at Lancaster University who researches robotics, AI, war, ethics, and drones, stressed that the concern is not so much the use of the technology to track an individual as the decision-making process that flags them as a "person of note" to be tracked in the first place. He questioned what puts those individuals on a target list and whether they pose a genuine threat.

He added that this is the key ethical dilemma at the heart of modern drone operations overseas, because the US and its allies use drones to strike preemptively, assassinating, killing, or murdering targets perceived as threats.

Ryder said the preemptive nature of those operations is problematic because it rests on the assumption that the target might carry out a terror attack in the future.

Edward Santow, an industry professor of responsible technology at the University of Technology Sydney, is concerned about the accuracy of facial recognition, noting that it remains an experimental technology and could cause unlawful death or injury.

Toby Walsh, a professor of AI at the University of New South Wales in Sydney, echoed the same sentiment. He added that the technology is biased and performs poorly on people of color, noting that its errors could be fatal.
