My Ph.D. Topic

Topic/Title – Examining the Human Factors of AI in surveillance-based autonomous UAVs in Australian law enforcement and border control contexts.

 

Short description (the elevator pitch – or how do I explain my research in 10 seconds)

This project aims to ascertain the role AI plays in surveillance-based UAVs within Australian law enforcement and border control contexts.  Specific focus is placed on the Human Factors aspects of UAVs in these environments, including determining the capability remote pilots have when piloting UAVs with AI functionality.  This project also seeks to analyse the decision process used when UAVs in these contexts encounter or identify a suspected person, vehicle or incident.  The final aspect of this research is to identify whether community safety protocols are a limiting factor during the mission of an AI-based surveillance UAV.

The long description (and reasoning)

The International Civil Aviation Organization (ICAO) defines Human Factors as being “concerned with the application of what we know about human beings, their abilities, characteristics and limitations, to the design of equipment they use, environments in which they function and jobs they perform” (ICAO, 2021).  CASA defines the concept as “referring to the wide range of issues affecting how people perform tasks in their work and non-work environments” (CASA, 2019).

Recognising the enormous opportunities, but also the risks and safety concerns, of AI-based systems, delegates from 27 countries, along with representatives from leading AI companies, attended the inaugural AI Safety Summit at Bletchley Park, England, in November 2023.  The outcome of this summit was the recognition that an international effort must be initiated so that the design and usage of AI systems caters for “…the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection…” (U.K. Government, 2023).

The above statement is composed for AI systems in a general and internal sense, relating to the data-matching operations undertaken by those systems.  However, similar concerns should also be raised about the external aspects of AI: the users (or the human interface) controlling or operating AI-based machines.

Advances in AI technologies have seen the rise of automated identification systems, e.g. facial and number plate recognition systems (Feldstein, 2019).  A natural merging of technologies has seen UAVs used in law enforcement and border control contexts with AI-based facial and number plate recognition capabilities (Koslowski & Schulzke, 2018).

Research has indicated, though, that many of the recognition systems in place are not accurate.  Factors such as the UAV's angle of vision (Shanthi et al., 2021; Dasilva et al., 2017), lighting and shadowing constraints (Kaimkhani et al., 2022), movement of the potential targets (Srivastava et al., 2022) and even racial bias, that is, the racial heritage of the target(s) being identified, can cause inaccurate identification and mistaken identity (Tol, 2019).

While the Bletchley Declaration (U.K. Government, 2023) refers to the internal aspects of AI in terms of accountability, fairness and explainability, questions remain concerning the three-way relationship between the remote pilot(s) and the usage of both AI and UAV(s) in law enforcement and border control environments (in both Australian and international contexts).

Essentially, what capacity do the remote pilot(s) have during these missions, and does the usage of AI in these missions present any technical, logistical or safety-related issues for the respective remote pilot(s)?  Following the recommendations of the AI Safety Summit (U.K. Government, 2023), do the remote pilots of AI-based missions in law enforcement and border control contexts have the same level of responsibility in terms of accountability, fairness, explainability and human oversight?

Statement of the principal focus of intended research

The principal focus of this research is to clarify the role AI plays in UAVs in Australian law enforcement and border control contexts and to determine what capacity the human element, i.e. the remote pilots, has to address, analyse and respond to any potential errors, risks or safety-related issues that may arise during the planning and actioning of missions.

Significance of the study

This research project has practical outcomes.  Drawing from the Bletchley Declaration (U.K. Government, 2023), this research focusses on the human element during the usage of AI-based UAVs in Australian law enforcement and border control contexts – namely, what is the role of AI in those environments when utilising a UAV, and do the remote pilots have the capability, or capacity, for accountability, safety, human oversight, ethics, privacy and bias mitigation during a mission?

From a regulatory perspective, researchers, policy advisors and regulators will be able to draw upon the outcomes of this research to assist in laying the foundation for a conceptual framework governing the usage of AI-based UAVs in the surveillance, law enforcement and security sectors.

If you are interested in the topic of my Ph.D. research, please contact me via this link.

 

References


CASA (2019). Safety behaviours: human factors for pilots – Resource booklet 1. Civil Aviation Safety Authority, Canberra, Australia.

Dasilva, J., Jimenez, R., Schiller, R., & Gonzalez, S. Z. (2017). Unmanned Aerial Vehicle-based Automobile License Plate Recognition System for Institutional Parking Lots. Systemics, Cybernetics and Informatics, 15(5), 39–43.

Feldstein, S. (2019). The Global Expansion of AI Surveillance. Carnegie Endowment for International Peace, Washington, DC.

ICAO (2021). Manual on Human Performance (HP) for Regulators (Doc 10151). International Civil Aviation Organization, Montreal, Canada.

Kaimkhani, N. A. K., Noman, M., Rahim, S., & Liaqat, H. B. (2022). UAV with Vision to Recognise Vehicle Number Plates. Mobile Information Systems, 2022. https://doi.org/10.1155/2022/7655833

Koslowski, R., & Schulzke, M. (2018). Drones Along Borders: Border Security UAVs in the United States and the European Union. International Studies Perspectives, 19, 305–324. https://doi.org/10.1093/isp/eky002

McCarthy, J. (2007). What is Artificial Intelligence? Stanford University, California, U.S.A.

Shanthi, K. G., Sivalakshmi, P., Sesha, V. S., & Sangeetha, S. K. (2021). Smart drone with real time face recognition. Materials Today: Proceedings, 80. https://doi.org/10.1016/j.matpr.2021.07.214

Srivastava, A., Badal, T., Saxena, P., Vidyarthi, A., & Singh, R. (2022). UAV surveillance for violence detection and individual identification. Automated Software Engineering. https://doi.org/10.1007/s10515-022-00323-3

Tol, J. (2019). Ethical Implications of Face Recognition Tasks in Law Enforcement [Master’s Thesis, University of Amsterdam].

U.K. Government (2023). The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023. Policy Document. The Stationery Office, London, U.K.
