Using AI to Manage Police Officer Cognitive Limits and Enhance Decision Making, Recruitment, and Retention: A Latent Inference Budgeting Approach
Executive Summary
Police officers are expected to perform near perfectly in complex and dynamic situations, an expectation that can erode their wellness and retention. Human decision making, however, is subject to cognitive biases, heuristics, and limitations that can lead to errors, mistakes, and sub-optimal choices. AI technologies, such as virtual reality, training models, and decision support systems, can enhance officers' performance, but they also pose ethical, social, and technical challenges. Furthermore, AI technologies can create a paradox of perfection: AI gadgets and widgets raise the standards and demands on police officers to perform flawlessly while ignoring the human factors and limitations that constrain their performance. At some point it must be acknowledged that humans cannot be expected to perform beyond human limits. Yet if AI can predict when a person's depleted latent budget makes failure or a sub-optimal choice likely, other support can be provided to secure optimal outcomes. Perhaps officers have already been pushed beyond perfection in many cases, a root cause of the recruitment and retention problem that remains unaddressed.
This paper proposes a novel approach to using AI to support police officers' decision making, based on the concept of latent inference budgeting. Latent inference budgeting is a framework for identifying and managing the computational constraints on human decision making and the sub-optimal choices that officers may make in stressful and uncertain scenarios. By using AI to monitor, alert, and intervene in officers' decisions, latent inference budgeting can help officers achieve optimal outcomes while reducing the risks of fatigue, stress, and burnout. This paper discusses the potential benefits, limitations, and implications of latent inference budgeting for police officers, and provides recommendations for future research and development.
Introduction
Police officers are expected to perform a variety of tasks and roles in their daily work, such as enforcing laws, preventing crimes, maintaining order, protecting citizens, and responding to emergencies. These tasks and roles require officers to make quick and accurate decisions, often under high levels of stress, uncertainty, and ambiguity. Moreover, the consequences of police officers' decisions can have significant impacts on their own safety, well-being, and reputation, as well as on the public trust, legitimacy, and accountability of the police force. Therefore, police officers face increasing expectations and pressures to perform near perfectly in complex and dynamic situations, which can affect their wellness and retention.
However, human decision making is subject to cognitive biases, heuristics, and limitations, which can affect the quality and accuracy of the decisions. Some of the common cognitive biases and heuristics that can affect human decision making are confirmation bias, availability heuristic, anchoring effect, framing effect, hindsight bias, and overconfidence bias. Some of the common cognitive limitations that can affect human decision making are attention, memory, processing speed, and mental workload. Cognitive biases, heuristics, and limitations can lead to errors, mistakes, and sub-optimal choices, especially in complex and dynamic situations, where the information is incomplete, uncertain, or conflicting, and the time and resources are limited[1].
AI technologies, such as virtual reality, training models, and decision support systems, can enhance officers' performance by providing realistic simulations, feedback, guidance, and information. Virtual reality (VR) can train police officers in skills and competencies such as use of force, de-escalation, communication, situational awareness, and decision making. It can present officers with realistic and diverse scenarios in which to practice and test their responses while receiving immediate, personalized feedback, and it can build empathy, perspective taking, and cultural sensitivity by exposing officers to different viewpoints, backgrounds, and experiences. Training models are computational models that use data and algorithms to generate, evaluate, and improve trainee performance. They can assess and enhance officers' skills by providing objective, adaptive feedback, guidance, and recommendations, and they can help officers identify and correct their cognitive biases, heuristics, and limitations by presenting alternative scenarios, outcomes, and explanations. Decision support systems (DSS)[2] are information systems that use data and algorithms to assist users in making decisions by providing relevant and timely information, analysis, and suggestions. DSS can support police officers across tasks and roles, from enforcing laws and preventing crimes to maintaining order, protecting citizens, and responding to emergencies, by giving officers access to sources such as criminal records, social media, surveillance, and biometrics, and by supplying analysis such as risk assessment, threat detection, crime prediction, and resource allocation. DSS are a valuable component of a Real Time Crime Center (RTCC).
However, AI technologies also pose ethical, social, and technical challenges around privacy, bias, transparency, reliability, and security:
- Privacy. AI systems create digital profiles, traces, and inferences that can reveal individuals' identities, preferences, and activities, exposing them to surveillance, tracking, and profiling.
- Bias. Data and algorithms can reflect, amplify, or create systematic deviations from accuracy, fairness, or impartiality. Biased systems degrade the quality of decisions and can discriminate against, exclude, or harm individuals and groups based on characteristics such as race, gender, age, or religion, violating their rights, dignity, and justice.
- Transparency. When the data, algorithms, and decisions of an AI system are opaque, complex, or incomprehensible, users and stakeholders cannot know, understand, or question its logic, rationale, or evidence, and cannot monitor, evaluate, or challenge its quality, accuracy, or impact, leaving the system unaccountable.
- Reliability. Inconsistent, inaccurate, or invalid outputs degrade decision quality, and unpredictable or unstable behavior causes users and stakeholders to lose trust, confidence, and satisfaction in the system.
- Security. Vulnerable or compromised systems allow malicious actors to access, use, or modify data, algorithms, and decisions for harmful purposes such as fraud, theft, sabotage, or manipulation, and adversarial or hostile systems can themselves be used to coerce, deceive, or exploit users and stakeholders[3].
Furthermore, AI technologies can create a paradox of perfection, in which AI gadgets and widgets raise the standards and demands for police officers to perform flawlessly while ignoring the human factors and limitations that may hinder their performance. This paradox creates unrealistic and unsustainable expectations and pressures that can affect officers' wellness and retention, and it raises ethical and social dilemmas, such as the loss of autonomy, agency, or responsibility, conflicts of values, norms, or principles, and the erosion of trust, legitimacy, or accountability[4].
The remainder of this paper develops the latent inference budgeting approach outlined above: it defines the framework, works through a use-of-force example, discusses the potential benefits, limitations, and implications for police officers, and provides recommendations for future research and development.
Latent Inference Budgeting
Latent inference budgeting is a framework for identifying and managing the computational constraints on human decision making and the sub-optimal choices that officers may make in stressful and uncertain scenarios. It is based on the idea that human decision making draws on a limited budget of cognitive resources, such as attention, memory, processing speed, and mental workload, that can be depleted or exhausted by factors such as stress, fatigue, emotion, or distraction. When that budget is depleted, decision making can become sub-optimal: it deviates from the rational choice that maximizes expected utility or value and minimizes expected cost or risk. Sub-optimal choices can have negative impacts on the performance, wellness, and retention of police officers, as well as on the public trust, legitimacy, and accountability of the police force[5].
Latent inference budgeting aims to identify and manage both the factors that deplete the budget of cognitive resources and the sub-optimal choices that result, by using AI to monitor, alert, and intervene in officers' decisions. Monitoring collects, processes, and analyzes data related to officers' decisions, including the context, goals, options, outcomes, and feedback. It can surface the factors draining cognitive resources, such as the complexity, uncertainty, or ambiguity of the situation; the officer's stress, fatigue, or emotion; and the cognitive biases, heuristics, and limitations at work in the decision, as well as the resulting errors, mistakes, and deviations from the rational choice. Alerting presents and communicates that information and analysis to the officer: the status of the situation, the risks, the alternatives, and suggestions. Relevant and timely alerts can help officers reduce situational complexity and ambiguity, cope with stress and fatigue, counteract their cognitive biases and limitations, and correct errors before they compound. Intervening acts on, influences, or modifies the decision itself, its actions, outcomes, or consequences, when monitoring and alerting are not enough, in order to restore the conditions for a rational choice[6].
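To make the core idea concrete, the following is a minimal sketch, in Python, of how a latent inference budget might be estimated from observed choices, in the spirit of Jacob, Gupta, and Andreas[5]. The toy MDP, the softmax decision model, and all parameter values are illustrative assumptions rather than the cited paper's implementation: an agent plans with a truncated number of value-iteration sweeps (its budget), and we recover a posterior over that budget from its behavior.

```python
import numpy as np

# Sketch of latent inference budget estimation. An agent plans with a
# truncated number of value-iteration sweeps k (its "budget"); given its
# observed choices, we infer a posterior over k. Environment is a toy MDP.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 3, 0.9
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # T[s, a] -> dist over s'
R = rng.normal(size=(n_states, n_actions))                        # reward per (s, a)

def q_values(budget):
    """Q-values after `budget` sweeps of value iteration (an anytime planner)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(budget):
        V = Q.max(axis=1)
        Q = R + gamma * (T @ V)
    return Q

def policy(budget, beta=3.0):
    """Softmax policy over truncated Q-values; beta is decision noise."""
    logits = beta * q_values(budget)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Simulate an agent whose true budget is depleted (2 sweeps of a possible 20).
true_budget = 2
pi_true = policy(true_budget)
states = rng.integers(n_states, size=200)
actions = np.array([rng.choice(n_actions, p=pi_true[s]) for s in states])

# Posterior over budgets under a uniform prior: P(k | data) ∝ Π_t π_k(a_t | s_t).
budgets = np.arange(1, 21)
log_lik = np.array([np.log(policy(k)[states, actions]).sum() for k in budgets])
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

print("MAP budget estimate:", budgets[post.argmax()])
print("Posterior mass on k <= 3:", round(post[budgets <= 3].sum(), 3))
```

In a policing context, the inferred budget would serve as the latent signal that the monitoring, alerting, and intervening functions above act on: a posterior concentrated on small budgets is evidence that the decision maker is planning under depleted resources.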
Examples of Latent Inference Budgeting
The following example shows how latent inference budgeting can be applied to support police officers' decision making using AI technologies such as VR, training models, and DSS. The example focuses on use of force, but the same approach could be applied to vehicular pursuits and other scenarios.
Example 1: Use of force.
Use of force is one of the most critical and controversial decisions that police officers have to make, as it can have significant impacts on their own safety, well-being, and reputation, as well as on the public trust, legitimacy, and accountability of the police force. Use of force can also be one of the most stressful and uncertain decisions that police officers have to make, as it can involve complex and dynamic situations, where the information is incomplete, conflicting, or changing, and the time and resources are limited. Use of force can deplete or exhaust the budget of cognitive resources, and lead to sub-optimal choices, such as excessive, unnecessary, or inappropriate use of force, or insufficient, delayed, or ineffective use of force.
Latent inference budgeting can help identify and manage the factors that deplete the budget of cognitive resources, and the sub-optimal choices that result, by using AI to monitor, alert, and intervene in officers' use-of-force decisions. Monitoring: AI can monitor use-of-force decisions by collecting, processing, and analyzing data about the situation, the officer, and the decision making, such as body camera footage, radio traffic, biometric data, decision logs (CAD data), and the RMS police report. This can surface the factors draining cognitive resources (situational complexity, uncertainty, or ambiguity; the officer's stress, fatigue, or emotion; cognitive biases, heuristics, and limitations) and the resulting sub-optimal choices, whether excessive, unnecessary, or inappropriate force, or insufficient, delayed, or ineffective force.
Alerting: AI can alert officers about their use-of-force decisions by presenting and communicating information and analysis about the situation, the officer, and the decision making: the status, the risks, the alternatives, and suggestions. Relevant and timely alerts can help officers reduce the complexity, uncertainty, or ambiguity of the situation, cope with stress, fatigue, or emotion, and counteract the cognitive biases, heuristics, and limitations at work. Feedback, guidance, and recommendations can also help officers correct errors, mistakes, and deviations from the rational choice before they compound.
Intervening: AI can intervene in use-of-force decisions by creating an interrupter that acts on, influences, or modifies the actions, outcomes, or consequences, for example by signaling the officer to terminate the use of force, activating backup, or notifying a supervisor. Intervention can change the situation, support the officer, or reshape the decision itself, in order to reduce complexity and ambiguity, relieve stress and fatigue, overcome cognitive limitations, and correct deviations from the rational choice.
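As a concrete illustration of how these three functions might fit together, here is a minimal sketch of a monitor-alert-intervene loop. The signal names, weights, thresholds, and escalation actions are all hypothetical assumptions chosen for illustration; a fielded system would derive them from validated models, agency policy, and legal review.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NONE = "none"                  # budget looks healthy; keep monitoring
    ALERT = "alert"                # push suggestions to the officer
    NOTIFY_SUPERVISOR = "notify"   # loop in a supervisor or dispatcher
    INTERRUPT = "interrupt"        # activate backup and escalation protocols

@dataclass
class MonitorSnapshot:
    """Hypothetical fused monitoring signals, each normalized to [0, 1]."""
    heart_rate_elevation: float    # from biometric sensors
    hours_on_shift: float          # fatigue proxy from CAD/scheduling data
    incident_complexity: float     # e.g., scored from the CAD call type
    radio_speech_stress: float     # e.g., from radio-traffic analysis

# Illustrative weights; a real system would learn these from data.
WEIGHTS = {
    "heart_rate_elevation": 0.3,
    "hours_on_shift": 0.2,
    "incident_complexity": 0.3,
    "radio_speech_stress": 0.2,
}

def depletion_score(snap: MonitorSnapshot) -> float:
    """Weighted estimate of how depleted the officer's budget is (0 to 1)."""
    return sum(w * getattr(snap, name) for name, w in WEIGHTS.items())

def decide_intervention(score: float) -> Action:
    """Escalating tiers; the thresholds are illustrative placeholders."""
    if score < 0.4:
        return Action.NONE
    if score < 0.6:
        return Action.ALERT
    if score < 0.8:
        return Action.NOTIFY_SUPERVISOR
    return Action.INTERRUPT

snap = MonitorSnapshot(heart_rate_elevation=0.9, hours_on_shift=0.7,
                       incident_complexity=0.8, radio_speech_stress=0.6)
score = depletion_score(snap)
print(f"depletion={score:.2f} -> {decide_intervention(score).value}")
```

The design choice to escalate through tiers, rather than jump straight to interruption, reflects the concerns raised under Further Research below: lower tiers preserve officer autonomy and only the highest tier overrides it.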
Further Research
Latent inference budgeting is a novel and promising approach to improve the officers' use of force decisions, by using AI to monitor, alert, and intervene in their cognitive processes. However, there are still many challenges and limitations that need to be addressed and overcome, in order to make this approach feasible, effective, and ethical. Some of the possible directions for further research are:
- How to collect, process, and analyze the data and information related to the situation, the officer, and the decision making, in a reliable, accurate, and timely manner, without violating the privacy, security, or autonomy of the officers or the civilians involved?
- How to provide, present, and communicate the information and analysis related to the situation, the officer, and the decision making, in a clear, relevant, and helpful manner, without overwhelming, distracting, or influencing the officers or the civilians involved?
- How to create an interrupter that will act, influence, or modify the actions, the outcomes, or the consequences, in a safe, appropriate, and effective manner, without interfering, overriding, or contradicting the officers or the civilians involved?
- How to evaluate the impact and the outcome of the latent inference budgeting approach, in terms of the officers' use of force decisions, the officers' cognitive resources, the officers' well-being and performance, the civilians' rights and safety, and the public trust and perception?
One possible direction for further research is to explore how latent inference budgeting can be integrated with existing or emerging frameworks and programs that aim to improve the officers' use of force decisions and de-escalation skills, such as:
- PERF's ICAT (Integrating Communications, Assessment, and Tactics), which is a training program that provides officers with critical thinking skills, tactical options, and communication strategies when dealing with situations involving persons who are unarmed or armed with weapons other than firearms.
- Georgetown Law's ABLE (Active Bystandership for Law Enforcement), which is a training program that teaches officers how to prevent or stop misconduct, reduce mistakes, and promote health and wellness among their peers, by providing them with practical skills and techniques of active bystandership.
- PAARI (Police Assisted Addiction and Recovery Initiative), which is a non-profit organization that partners with law enforcement agencies to provide pathways to treatment and recovery for people with substance use disorders, by diverting them from the criminal justice system and connecting them with community-based resources and support.
- CIT (Crisis Intervention Team), which is a model of collaboration between law enforcement, mental health professionals, and advocates, that aims to improve the response to people experiencing mental health crises, by providing officers with specialized training, consultation, and referral services.
These programs could benefit from latent inference budgeting, by using AI to support and supplement the officers' use of force decisions and de-escalation skills, in the following ways:
- AI could monitor the officers' cognitive resources, stress levels, and emotional states, as well as the situational factors, such as the level of threat, the type of weapon, the characteristics of the person, and the environmental conditions, and alert the officers or their supervisors when they are at risk of exceeding or exhausting their budget of cognitive resources, or when they need to apply the skills and techniques learned from the programs.
- AI could intervene in the officers' use of force decisions, by providing them with timely, relevant, and helpful information and analysis, based on the programs' frameworks and principles, that could help them to assess the situation, communicate effectively, choose the appropriate tactics, and consider the alternatives and consequences of their actions, without overwhelming, distracting, or influencing them.
- AI could also integrate with everyday police operations, such as dispatch, CAD, radio traffic monitoring, body camera sentiment analysis, and DSS in a Real Time Crime Center, and use the data from these sources to enhance the monitoring and intervention functions: dispatching additional units, prompting a dispatcher on when to check in with an officer and what questions to ask, sending SMS messages to involved staff or dispatchers with suggestions they may not be considering in the heat of the moment, or activating other resources and protocols that could assist the officers or civilians involved (a sketch of one such integration follows this list).
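The following is a minimal sketch of one such integration point: a rule layer that maps monitoring events fused from CAD and radio traffic to dispatcher prompts and SMS nudges. Every event type, field name, prompt, and threshold here is a hypothetical assumption for illustration; real CAD and messaging interfaces vary by vendor and agency.

```python
from dataclasses import dataclass

@dataclass
class MonitoringEvent:
    """Hypothetical event fused from CAD, radio traffic, and biometrics."""
    officer_id: str
    event_type: str        # e.g., "budget_depleted", "no_radio_checkin"
    minutes_on_scene: int

# Illustrative mapping from event types to dispatcher prompts.
DISPATCHER_PROMPTS = {
    "budget_depleted": "Ask the officer for a status check and whether "
                       "backup or a supervisor is needed.",
    "no_radio_checkin": "Request a verbal check-in; confirm location "
                        "and whether the scene is still active.",
}

def dispatcher_prompt(event: MonitoringEvent) -> str | None:
    """Return the question a dispatcher should be prompted to ask, if any."""
    prompt = DISPATCHER_PROMPTS.get(event.event_type)
    if prompt and event.minutes_on_scene >= 10:   # illustrative threshold
        return f"[{event.officer_id}] {prompt}"
    return None

def sms_nudge(event: MonitoringEvent) -> str | None:
    """Draft an SMS suggestion for involved staff (sent via any gateway)."""
    if event.event_type == "budget_depleted":
        return (f"Unit {event.officer_id}: consider slowing the pace, "
                f"creating distance, and waiting for backup.")
    return None

ev = MonitoringEvent(officer_id="A-12", event_type="budget_depleted",
                     minutes_on_scene=14)
print(dispatcher_prompt(ev))
print(sms_nudge(ev))
```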
By combining latent inference budgeting with these programs, we could create a comprehensive and holistic approach to improving the officers' use of force decisions and de-escalation skills, and ultimately, to enhancing the police-civilian interactions and relations.
Conclusion
In this white paper, I have introduced the concept of latent inference budgeting, a way of understanding and managing officers' use-of-force decisions based on the idea that officers have a limited budget of cognitive resources that can be depleted or exhausted by various factors, leading to sub-optimal choices. I have also proposed a way of implementing latent inference budgeting, by using AI to monitor, alert, and intervene in officers' use-of-force decisions, to help them optimize their budget of cognitive resources and make better choices. I have discussed the potential benefits and challenges of this approach and suggested directions for further research. I believe that latent inference budgeting can be a valuable tool for enhancing officers' use-of-force decisions and, ultimately, for improving police-community interactions and relations.
[1] Berthet, V. (2022). The Impact of Cognitive Biases on Professionals' Decision-Making: A Review of Four Occupational Areas. Retrieved from https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.802439/full
[2] Doyle, O. (2020). The Role of Decision Support Systems in the Criminal Justice System. Retrieved from https://scholar.colorado.edu/downloads/sj139285p
[3] Green, B. (2020). Artificial Intelligence and Ethics: Sixteen Challenges and Opportunities. Retrieved from https://www.scu.edu/ethics/all-about-ethics/artificial-intelligence-and-ethics-sixteen-challenges-and-opportunities/
[4] University of California - Davis (2024). The Paradox of Perfection: Can AI Be Too Good To Use? Retrieved from https://scitechdaily.com/the-paradox-of-perfection-can-ai-be-too-good-to-use/
[5] Jacob, P., Gupta, A., & Andreas, J. (2024). Modeling Boundedly Rational Agents With Latent Inference Budgets. Retrieved from https://openreview.net/pdf?id=W3VsHuga3j
[6] Jacob, P., Gupta, A., & Andreas, J. (2024). Modeling Boundedly Rational Agents With Latent Inference Budgets. Retrieved from https://openreview.net/pdf?id=W3VsHuga3j