Racial Profiling: “What Does the Data Mean?”
A Practitioner’s Guide to Understanding Data Collection & Analysis

by Captain Ronald L. Davis, Region Vice President,
National Organization of Black Law Enforcement Executives (NOBLE).


According to a Washington Post survey[1], 52% of African-American males polled believe they have been victims of racial profiling. Approximately 60% of Americans polled[2] believe racial profiling exists.  However, this was not always such a pervasive belief.  Over the past two years there has been intense national debate on whether racial profiling is a reality or a perception.

It was data collection, specifically the New Jersey and Maryland studies, that transformed racial profiling from what many labeled as a minority-community perception to what most people now accept as a national reality.   The debate over whether racial profiling exists is over.  How to end racial profiling is where the disagreement now exists.

In response to racial profiling, over 400 law enforcement agencies throughout the United States have implemented some form of traffic stop data collection. Fourteen states have passed racial profiling legislation mandating racial profiling policies, training, and data collection and analysis. United States Congressman John Conyers (D-Mich.) and Senators Russell Feingold (D-Wis.) and Hillary Clinton (D-NY) recently introduced the End Racial Profiling Act of 2001, which mandates data collection for law enforcement agencies receiving federal funds.

Many people believe data collection is necessary to end racial profiling.  Others believe data collection offers no practical value and simply validates what is already known. Opponents of data collection often cite the lack of credible analysis benchmarks as their primary basis of opposition. Consequently, the issue of data collection and analysis is the most controversial issue surrounding racial profiling.

Who’s right - who’s wrong?

Is data collection practical, a critical step toward ending racial profiling, or is it merely symbolic, a gesture to appease the minority community in hopes of instilling public trust?  The answer lies in both positions.

On the one hand, data collection is practical because, “you cannot manage what you don’t measure.” Statistics enable one to make intelligent inferences from data.  Proper data collection, using credible benchmarks, not only provides an organizational “snapshot” (a look at the organization at a specific point in time) but also assists administrators in identifying institutional and systemic problems.

Data collection is also symbolic, a gesture of openness to the community and a commitment to equality. It translates to “we have nothing to hide” and represents the willingness of law enforcement to take an introspective look to prevent disparate treatment.  It also demonstrates a true commitment by law enforcement to address community needs and concerns.

Most people agree data collection is not a panacea or the final answer to racial profiling.  Proper data collection and analysis, however, is a critical first step in developing solutions to end racial profiling.  On May 3, 2001, the National Organization of Black Law Enforcement Executives (NOBLE) issued its national report[3] on racial profiling. The NOBLE report identified racial profiling as a symptom of “bias-based policing,” which is defined as:

The act (intentional or unintentional) of applying or incorporating personal, societal or organizational biases and/or stereotypes in decision-making, police actions or the administration of justice.

NOBLE believes bias-based policing is a systemic problem in the industry and requires a comprehensive strategy to effect systematic reform.  Effective data collection and credible data analysis are necessary “tools” to reform the system. NOBLE supports racial profiling legislation that requires data collection and analysis, racial profiling training, and the implementation of racial profiling policies.

However, this position is not shared by all law enforcement organizations.  Many law enforcement organizations and officials believe the decision to collect data should rest with the local police chief and therefore oppose any legislation that mandates data collection. This is an understandable position, but it does not take into account the industry’s general lack of knowledge and understanding of racial profiling.

60/60 Dichotomy

According to a survey conducted by the Police Executive Research Forum (PERF), over 60% of the police chiefs surveyed did not believe racial profiling occurred in their jurisdictions, while 60% of Americans surveyed (1999 Gallup Poll) believe racial profiling does exist.  Agencies are not likely to exhaust resources and funding on a problem that the administration believes does not exist.

The 60/60 dichotomy is primarily based on how racial profiling is defined.  Many police administrators define racial profiling as the use of race as the “sole” basis for a stop.  By using the word “sole,” racial profiling is defined as intentional discrimination and racism, which makes it aberrant behavior that is not necessarily widespread.

If you remove the word “sole” and identify racial profiling as a symptom of bias-based policing, then it is definitely widespread, because everyone has biases. Our biases, however, do not make us racists; they make us human.

Data Collection and Analysis

It is time to end the debate over racial profiling and data collection and focus our efforts on establishing credible data-analysis benchmarks.  We must identify how to analyze the data and define how it can be used in meaningful ways. The intent of this article is to provide law enforcement and the community a practitioner’s perspective on data collection and analysis, identify practical considerations in establishing benchmarks, and provide “plain language” steps to effective data collection and analysis.

What data?

Over the past two years valuable lessons in data collection have been learned from pioneer law enforcement agencies such as Oakland and San Jose, California.  At a minimum, the following data should be collected (this does not include additional fields that may be required based on local variables, discussed later in the article).

What race are you?

Another controversial issue surrounding data collection is whether officers should inquire about the race of drivers or whether drivers should self-identify.   Race and ethnicity should be based on officer perception and experience; officers should not inquire into race or ethnicity. Imagine being asked, “What race are you?” Questioning a person about their race or ethnicity may create the perception of bias and result in increased tension and animosity toward the officer.

Many believe the data is not credible unless the person identifies his or her own race or ethnicity.  This would be true if the data were being used for official demographic reports such as the Census.  In the case of racial profiling, however, it is not the stop percentage of a specified race or ethnicity that needs to be captured, but the percentage of stops police conduct on persons believed to be of a certain race.

Racial profiling is any police action that is based on the race, ethnicity or national origin of a person rather than the behavior of an individual[4].  In most cases, the application of race, ethnicity or national origin is based on the officer’s perception of race or ethnicity.  It is therefore crucial that we capture what race the officer believed.  If an officer observes what he believes to be a black male driving a vehicle and stops the vehicle, or finds a pretext to stop the vehicle based on the driver’s race, it is irrelevant if the driver turns out to be Hispanic, Puerto Rican, Caucasian, etc.  What is important is that the officer believed the driver was black.

Involving Stakeholders

The key to successful data collection lies in the process as much as the actual result, or as Ralph Waldo Emerson stated, “the process is the product.”  When it comes to data collection, the most effective agencies have formed local task forces that involve all stakeholders in developing local data collection programs and in identifying local variables in establishing data analysis benchmarks.

A task force should be composed of representatives from civil-rights and community-based organizations; rank-and-file, supervisory and command officers; police union representatives; and minority law enforcement organizations at the local level.

The primary purpose of the task force is to develop data collection and analysis processes that fit the local agency and community.  The task force should also identify local variables (necessary in establishing credible benchmarks) and market the department’s data collection and analysis efforts, which establishes credibility in the process and instills public trust.  The task force can also assist in developing racial profiling policies and training for local law enforcement agencies.

Data analysis

Before deciding what data to collect, we must define the purpose of data collection, or as Stephen Covey (The Seven Habits of Highly Effective People) has said, “Begin with the end in mind.”

The “end” or goal of data collection is comprehensive analysis and practical application.  Analyze the data to the extent the information can be used to benefit law enforcement and the community.  For data collection to be meaningful, it must be useful.  More importantly, it must be accurate. Improper data collection and inaccurate analysis are irresponsible: they contribute to negative perceptions in the community and of law enforcement, and result in an overall lack of confidence in the process.


San Jose Police Chief William Lansdowne is considered by many as the “father” of traffic-stop data collection.  The San Jose Police Department was one of the first law enforcement agencies in the country to voluntarily collect traffic stop data. Chief Lansdowne based his decision on concerns expressed by the minority community.

Armed with good intentions and an outstanding staff, the San Jose Police Department developed and implemented a data collection program that served as a national model. Chief Lansdowne was applauded by the ACLU, the NAACP and many other civil-rights and community-based organizations for his courage and commitment. Then Chief Lansdowne released his first data-collection report[5], which showed a slight disparity (less than 10%) between the percentage of traffic stops of Hispanics and the percentage of Hispanics residing in the city.

The San Jose Police Department came under immediate criticism from a few civil-rights and community-based organizations because the stop statistics did not “perfectly” match the aggregate demographics of the City. This type of comparison – vehicle stop data against aggregate census data - became the national trend.

The 1990 Census became the sole data analysis benchmark for many people and organizations. Racial profiling and discrimination accusations were launched against police agencies based on this comparison. Not only is this practice inaccurate, it is outright irresponsible, and actually contributes to negative perceptions in the community.

As a result of San Jose’s initial experience, many police administrators are apprehensive about data collection in fear they too will be accused of racial profiling and racism based solely on statistical disparities. Effective benchmarking must incorporate the complexities of effective policing, as well as societal and cultural disparities.

Baseline Comparison Data (BCD)

The census does not serve as an effective data-analysis benchmark or baseline. The census reports the percentage of residents in a city; it does not provide the number or demographics of the actual drivers or traffic violators, which by most accounts is the most effective baseline.

The census does not provide the number of people that visit or drive through a jurisdiction – commonly referred to as the “daytime” population. The census is also known to have high “miss” rates (number of people not counted) in the minority community, and like all statistical studies, the census also has an error rate.

To the extent possible, police agencies should utilize professional researchers to conduct statistical samplings and surveys in order to determine violator-demographics and daytime population. In the absence of statistical sampling, agencies that use current (2000) census data must narrow the data to persons of driving age and incorporate all relevant local variables.
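As a rough illustration of that narrowing step, the sketch below converts raw census counts into a driving-age baseline. Every count and driving-age share here is invented for illustration; they are not actual census figures.

```python
# Hypothetical sketch: narrowing raw census counts to a driving-age baseline.
# All counts and driving-age shares below are invented for illustration.
census = {"black": 142_000, "white": 93_000, "hispanic": 103_000}
driving_age_share = {"black": 0.70, "white": 0.78, "hispanic": 0.68}  # assumed

# Estimated driving-age population per group
driving_age = {g: census[g] * driving_age_share[g] for g in census}
total = sum(driving_age.values())

# Baseline: each group's share of the driving-age population,
# which can differ noticeably from its share of the raw census count
baseline = {g: driving_age[g] / total for g in census}
for g, share in baseline.items():
    print(g, f"{share:.1%}")
```

The point of the sketch is simply that a group's share of the driving-age population (the relevant comparison group for traffic stops) need not equal its share of the total census count.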

Aggregate Data

Aggregate percentages do not reflect racial or ethnic population density.  Many neighborhoods are predominately one race or ethnicity.  Consequently, the number of stops conducted in these neighborhoods skews the aggregate census comparison. The number of officers assigned to specific neighborhoods can also skew the aggregate data.

Most police agencies divide their city into beats, districts, precincts or areas and deploy staff based on population density, reported crime and calls for service. Most high-crime areas have more officers assigned. If higher percentages of officers are assigned to areas that are predominately one race or ethnicity, the number of stops will be higher for that race or ethnicity.

These disparities do not result from law enforcement. These disparities are societal, based on many factors ranging from historic racism and discrimination to education and socio-economic conditions.  The police cannot be held accountable for societal-based disparities, unless these disparities are used to form biases and stereotypes that are then applied in policing.

Oakland, California

According to the 2000 U.S. Census, the City of Oakland is approximately 36% African-American (black), 23% Caucasian (white) and 26% Hispanic.  There are neighborhoods, however, in which 60-75% of the population is black.  Depending on the number of stops conducted in these areas, the overall percentages may skew the aggregate data.

The City of Oakland is divided into three Police Areas.  The demographics differ in each police area, which differs from the aggregate census demographics.  The 36% black population is not evenly or proportionately distributed throughout the city.  This too may skew the aggregate data.

Oakland, like most urban cities, is divided along historic racial-geographic lines: areas in which the majority of the population is of one race or ethnicity.  In the early 1920s and 1930s they were called “ghettos.”  Today, most cities have areas such as “Chinatown,” “Little Italy,” etc.

In the City of Oakland, approximately 85% of the black population resides in an area referred to as the “flatlands” and 85% of the white population resides in an area known as the “hills.”  The “flatlands” are high-crime, low-income neighborhoods; the “hills” are low-crime, affluent neighborhoods.

The “flatlands” and “hills” are not police designations or official police precincts.  They are historic racial-geographic boundaries (named by the community) that encompass parts of all three precincts.   Although 85% of Oakland’s black population resides in the flatlands, it represents only 60% of the aggregate flatlands demographics.  Although 85% of the white population lives in the hills, it too represents only 60% of the hills demographics.

Approximately 85% of crime reported in Oakland is committed in the “flatlands,” which is 60% black (representing 85% of the total black population).  Approximately 85% of the officers on each watch are assigned to the “flatlands,” which also accounts for 85% of the calls for service.  Approximately 100 officers are assigned to a watch: 85 in the “flatlands” and 15 in the “hills.”

Establishing Benchmarks

Before comparing the data, one must first establish benchmarks.  One method is to identify the “perfect” data set and statistical match.  In other words, what would the statistics reflect if all of the stops matched perfectly with the demographics of the areas in which they are conducted?  The key to this methodology is to determine the comparison area; this in turn determines what baseline data will be used.  Do you use the aggregate census data, the precinct demographics, or the racial-geographic statistics?

In the case of Oakland, census data would not be a good baseline. As stated earlier, Oakland is divided by “natural” racial-geographic boundaries and sectioned into three precincts. Precinct demographics would also be ineffective as racial-geographic “pockets” encompass all three areas (precincts) and skew both the aggregate and precinct demographics. The best baseline for Oakland is racial-geographic.

Next, one must identify staffing deployments relative to the racial-geographic boundaries.  In short, how many officers are assigned to the “hills,” how many officers are assigned to the “flatlands?” As part of the “perfect” data model, each officer stops a consistent number of people whose demographics match perfectly with the areas assigned; 60% black in the flatlands, 40% black in the hills; 60% white in the hills, 40% white in the flatlands.

The chart below outlines how a perfect data set model establishes a racial-geographic baseline for the City of Oakland.



Area          # of officers     # of stops     % of blacks in area     # of blacks stopped

Flatlands          85              8,500               60%                   5,100

Hills              15              1,500               40%                     600

Total             100             10,000                                     5,700


“Model” Results

·         Each officer conducts 100 stops that perfectly match the demographics of the area assigned (flatlands or hills), for a total of 10,000 stops

·         According to racial-geographic percentages, 5700 out of 10,000 stops should be black, which represents 57% of the stops (the actual stop percentage is 48%[6])

·         57% represents a reasonable stop-benchmark for the City of Oakland

·         According to the 2000 Census, Oakland is 36% black

·         Consequently, there could be a 21% disparity (actual disparity is 12%)
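The model arithmetic above can be sketched in a few lines. This is a hypothetical illustration in Python; the deployment figures, area demographics and per-officer stop count are those used in the article’s model, and the code itself is not part of any agency’s actual methodology.

```python
# Sketch of the "perfect data set" stop benchmark described above.
# Figures (officers per area, area demographics, stops per officer)
# follow the article's Oakland model; the code is illustrative only.
areas = {
    "flatlands": {"officers": 85, "pct_black": 0.60},
    "hills":     {"officers": 15, "pct_black": 0.40},
}
STOPS_PER_OFFICER = 100  # 100 officers x 100 stops = 10,000 stops per watch

total_stops = sum(a["officers"] * STOPS_PER_OFFICER for a in areas.values())
black_stops = sum(
    a["officers"] * STOPS_PER_OFFICER * a["pct_black"] for a in areas.values()
)

benchmark = black_stops / total_stops  # expected share of stops of black drivers
census_share = 0.36                    # 2000 Census: Oakland is 36% black

print(f"Stop benchmark: {benchmark:.0%}")                                # 57%
print(f"Apparent disparity vs. census: {benchmark - census_share:.0%}")  # 21%
```

The benchmark falls out of deployment and area demographics alone, which is the article’s point: the 21% gap from the census figure appears before any officer behavior is considered.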

Upon seeing this 21% disparity, many would immediately accuse the police of racial profiling.  This is not only inaccurate, it is outright irresponsible.  The 57% actually represents a reasonable stop-benchmark for the City of Oakland, taking into account population density and staffing deployments.

The 21% disparity is the result of 85% of reported crime being committed in an area (flatlands) that is 60% black and from where 85% of calls for service are received.  Consequently, 85% of the officers are deployed in this area.  More officers, more stops.

The 21% disparity represents societal-based disparities, not police-based disparities. Stop-percentages greater than 57% may indicate police disparities or racial profiling, depending on circumstances. Although 85% of reported crime is committed in an area that is 60% black, this does not mean 60% of blacks commit crime. Nor does it mean minorities are more likely to commit crime.

Law enforcement cannot and must not use this information as its model. To do so would create biases and stereotype minorities.  This information can and should be used to deploy staff effectively in areas where they are most needed.  It can also be used to develop strategies to improve the quality of life in high-crime neighborhoods.

The Fallacy Theory

Many chiefs and managers have stated, “If minorities are committing more crime, I expect my officers to stop and search more minorities.” This position may sound reasonable at face value, but it actually contributes to disparate stops, sets an organizational tone that supports profiling, and in most cases is based on what I call the “fallacy theory,” which states:

“If the majority of crime is committed by blacks, then the majority of blacks commit crime.”

Even if we accept (for the sake of argument only) that the majority of reported crimes in certain areas are committed by minorities, it is most likely that the percentage of people committing crimes represents less than 10% of that minority group.  It is unreasonable to cast suspicion on an entire group or class of people based on the actions of a few.

Traffic Stops & Crime Reduction

Most law enforcement officials believe traffic stops are effective in “catching bad guys” and in reducing and preventing crime. However, the 1972 Kansas City Preventive Patrol Experiment showed that random patrols and stops do not necessarily prevent or deter crime.

Recent traffic stop data released by agencies across the country, including Oakland, reveals that approximately 3 to 10 percent of traffic stops result in arrests; the majority of those arrests are for traffic-related violations or warrants, not the criminal offenses that are used to justify the statistical disparities.

In short, there is no empirical data to suggest traffic stops reduce or prevent crime; therefore it is not reasonable to expect officers to conduct traffic stops based on the demographics or profiles of known criminals. To do so results in race and/or ethnicity being used as a “predictor” of crime rather than a “descriptor” of a criminal.  This constitutes racial profiling and is a violation of the Fourteenth Amendment to the Constitution.

Searches & Crime

Although there is not necessarily a nexus between car stops and crime reduction, there is a direct nexus between searches, post-stop activities and crime.  As a general rule, a search cannot be conducted unless there is probable cause to believe a person has committed a crime.  Therefore, search statistics and post-stop activities must be analyzed using different benchmarks and baseline comparison data.

The most effective baseline data for searches and post-stop activities is “reported crime.”  This does not mean suspect demographics or profiles; it means agencies should plot reported crime by police-geographic or racial-geographic boundaries. The number of searches conducted should be consistent with the percentage of crime in that area and with the demographics of that area.

For example, 85% of reported crime in Oakland occurs in the area known as the “flatlands,” which is 60% black.  It is therefore reasonable for 85% of vehicle searches to be conducted in the area where the crime is reported.  It is not reasonable to expect officers to conduct the majority of criminal searches in areas where reported crime is low.

Consequently, 85% of the total searches will most likely be conducted in an area that is 60% black.  This may be subdivided into smaller geographic areas based on search density relative to police geographic boundaries. Agencies should plot where the searches are conducted and compare search density to reported-crime density and local demographics.

Oakland, California

Oakland is 36% African-American; however, 65% of searches[7] were conducted on African-Americans. This appears to be a 29% disparity.  It is crucial to identify whether the 29% disparity is police-based, societal-based, or both.  First, the baseline data and benchmark must be identified before comparing search data.  The “perfect” data set is once again an effective method for establishing benchmarks.

In post-stop categories, such as searches, the census, precinct and/or racial-geographic demographics may not be effective benchmarks.  Reported crime relative to the searches is a more effective benchmark.  In other words, the percentage of searches conducted by officers should be proportionate to the percentages of crime committed in each area.

In the case of Oakland, racial-geographic boundaries were used as the stop-benchmark.  That same benchmark must now serve as the basis to overlay reported crime and searches.  If 85% of reported crime is in “flatlands”, it is reasonable to expect 85% of searches to be in the flatlands.  Consequently, 85% of searches will be conducted in an area that is 60% black and the other 15% of the searches will be conducted in an area that is 40% black.

The actual Oakland search data is listed below.  The perfect data set model displays the search benchmark used by the City of Oakland.


·         There were a total of 2229 searches

·         90% or 2006 searches were conducted in the flatlands

·         10% or 223 searches were conducted in the hills

·         85% of reported crime occurs in the flatlands

·         15% of reported crime occurs in the hills

·         The flatlands are 60% black, which represents 85% of the black population

·         The hills are 40% black, which represents 15% of the black population

Model Results

·         1,203 of the flatlands searches (2,006 × 60%) should be of blacks

·         89 of the hills searches (223 × 40%) should be of blacks

·         A total of 1,292 out of 2,229 searches, or 58%, should be of blacks

·         This means Oakland, at 65%, is 7% over its benchmark.
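The same arithmetic can be sketched as follows. This is a hypothetical Python illustration using the search counts and area demographics from the Oakland data above; small rounding differences aside, it reproduces the benchmark comparison.

```python
# Sketch of the search benchmark: actual search counts per area weighted by
# each area's black share of population. Counts and percentages follow the
# Oakland figures cited in the article; the code is illustrative only.
searches  = {"flatlands": 2006, "hills": 223}   # actual searches per area
pct_black = {"flatlands": 0.60, "hills": 0.40}  # black share of each area

expected_black = sum(searches[a] * pct_black[a] for a in searches)
total_searches = sum(searches.values())

benchmark = expected_black / total_searches  # expected share of searches of blacks
actual = 0.65  # observed: 65% of searches were conducted on African-Americans

print(f"Search benchmark: {benchmark:.0%}")                   # 58%
print(f"Variance above benchmark: {actual - benchmark:.0%}")  # 7%
```

Note that the comparison is against the crime-weighted benchmark, not the 36% census figure; that is what shrinks the apparent 29% disparity to a 7% variance.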

It should be noted that the 7% variance does not necessarily constitute police-based disparities or racial profiling.  There are other variables to be considered: 29% of all searches conducted were based on probation and parole status, and 79% of those searches were of African-Americans.

This local variable, as well as others, must be considered in establishing a benchmark. In addition, most statistical studies factor in an error rate, which must be considered when analyzing any data set.

Probation & Parole

There are approximately 11,000 people on probation and parole in the City of Oakland. On any given day, approximately 700 parolees are wanted for some type of violation, commonly referred to as being “at large”[9].  It is estimated that over 50% of reported crime in Oakland is committed by persons on probation and parole.  Recidivism rates for people on probation and parole exceed 70%.

In response to these staggering statistics, the Oakland Police Department formed a Police and Corrections Team (PACT) that targets repeat offenders and provides education, training and job placement programs, as well as aggressive enforcement of parole and probation conditions.  The program results in hundreds of stops and searches of known persons on probation and parole.

It should be noted that in the State of California many persons on probation and parole are subject to warrantless searches and searches without probable cause as conditions of their probation and parole. This program creates variables that can skew the benchmark and provide “false positives”, which may be viewed as disparate and even discriminatory practices.

In the case of Oakland, 29% of all searches were of persons on probation and parole, 79% of whom were African-American. The Oakland statistics raise several questions. First, what are the demographics of persons on probation and parole in the City of Oakland?  Where do the majority of people on probation and parole reside: the flatlands or the hills?  The answers to these questions help determine to what extent this information contributes to the 7% variance.  They also determine whether the 7% disparity is societal-based or police-driven.

If 79% of people on probation and parole in Oakland are African-American, this would be consistent with the search data and reflect a societal-based disparity.  It would also be expected that approximately 85% of blacks on probation and parole reside in the flatlands, which is 60% African-American.  This, too, is a societal-based disparity that skews the aggregate search data.

Repeat Offenders

What is the true number of stops?  If the stop-data reflects 100 Hispanics stopped, does it reflect 100 Hispanics stopped once or 10 Hispanics stopped 10 times?  The answer to this question is another critical factor in data analysis.

One of the basic premises of community-oriented policing is working closely with the community to identify criminals.  Successful officers know who the criminals are, whether they are drug dealers, burglars or auto thieves.  As stated earlier, many of these known criminals are on probation and/or parole.

Officers may stop and detain persons on probation and parole, or known drug dealers, multiple times during a data-collection period.  The intent of the stops may vary.  Some may be based on a reasonable suspicion that a crime is in progress. Others may be based on probation and parole status, including invoking search clauses.  Many may be based on a pretextual traffic stop to “dig” further into the suspicious behavior of known criminals.  In any case, it is crucial that repeat stops be captured as part of data collection.

The category “repeat offender” can be added to data collection forms, with instructions for officers to note if they have previously stopped the person (within a specified time period). The time period should coincide with the data-collection and analysis time frame.
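One hedged way to handle that flag at the analysis stage is to key each stop to a driver identifier and count repeats within the collection window. The records, field names and identifier scheme below are invented for illustration; an agency would use whatever identifier its forms actually capture.

```python
# Hypothetical sketch: distinguishing total stops from unique drivers stopped
# within a data-collection window. All records and identifiers are invented.
from collections import Counter
from datetime import date, timedelta

window_start = date(2001, 1, 1)
window_end = window_start + timedelta(days=365)

# (driver_id, stop_date) -- driver_id stands in for however the agency
# links repeat stops on its forms; purely illustrative data
stops = [
    ("D-100", date(2001, 2, 1)),
    ("D-100", date(2001, 3, 5)),   # repeat stop of the same driver
    ("D-200", date(2001, 4, 9)),
]

in_window = [s for s in stops if window_start <= s[1] <= window_end]
per_driver = Counter(driver for driver, _ in in_window)

total_stops = len(in_window)        # raw stop count
unique_drivers = len(per_driver)    # distinct people stopped
repeat_stops = total_stops - unique_drivers
print(total_stops, unique_drivers, repeat_stops)  # 3 2 1
```

Separating the two counts answers the article’s question directly: 100 recorded stops of Hispanics could be 100 drivers stopped once or 10 drivers stopped 10 times, and only the unique-driver figure distinguishes them.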

Special Programs

Another variable to consider in data analysis is special enforcement programs. Drunk-driving checkpoints, seatbelt enforcement and homicide suppression units, for example, can result in disparate stop statistics, depending on the purpose and location of the program.

In many cases, these disparities will be societal-based.  In other cases, the disparities will be the result of bias and stereotypes, or police-driven.  In either case, administrators must have the ability to measure the effectiveness of their programs and their cost-benefits.

Case Study

A recent Meharry Medical Center study found that African-Americans have seatbelt non-compliance rates three times higher than any other race or ethnicity.  Consequently, African-American youth are victims of traffic fatalities at similarly disparate rates. This statistical disparity is cultural or societal-based, not police-based.

This disparity is relevant and even useful to law enforcement if viewed from the proper context. The data does not imply law enforcement is justified in stopping more blacks on the chance they are not wearing seatbelts (that is racial profiling), but it does imply that education, prevention and enforcement programs should be focused in the African-American community to increase seatbelt compliance and decrease traffic fatalities.

As mentioned earlier, there is no direct link between car stops and crime reduction. There is, however, a direct link between traffic enforcement and traffic fatalities.  Therefore, aggressive seatbelt enforcement programs may be necessary in communities that suffer from high-fatality rates, regardless of the race or ethnicity of the group not wearing seatbelts.

There will be a fear that the inevitably skewed stop statistics from an aggressive seatbelt campaign will be viewed as racial profiling, subjecting the agency to intense scrutiny and unwarranted attacks. While this might be true in many cases, the agency can justify stop disparities based on data collection (both traffic fatalities and traffic stops) and an accurate analysis that considers all locally relevant variables.  Therefore, the analysis of data should be as readily available as the raw data itself.

The key to an aggressive enforcement program is to first identify clear goals and objectives, such as an increase in seatbelt compliance and a reduction in traffic fatalities through enforcement. For this program, the “hit” rate (the rate at which the desired outcome is achieved) will be the proportion of stops in which people are in fact not wearing their seatbelts, not arrests or narcotics seizures. If an officer stops everyone not wearing a seatbelt, the hit rate is 100%.
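A minimal sketch of that hit-rate calculation, with invented counts standing in for a program’s actual data:

```python
# Hypothetical sketch of the "hit rate" for a seatbelt-enforcement program:
# the share of stops in which the driver was in fact unbelted.
# Both counts below are invented for illustration.
stops_made = 400        # total stops made under the program
unbelted_at_stop = 380  # stops where the driver was not wearing a seatbelt

hit_rate = unbelted_at_stop / stops_made
print(f"Hit rate: {hit_rate:.0%}")  # 95%
```

The denominator is stops made under the program, not arrests or seizures; a high hit rate is what validates the stops as the program’s stated purpose rather than a pretext.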

More than likely, the stop rate will be disparate, but this reflects societal-based disparities, not police-based disparities.  The other key will be post-stop activities.  Although more minorities may be stopped, the length and scope of the detentions should be statistically the same as for non-minorities, unless there are other local variables. Those variables, however, should be validated through a “hit” rate that justifies the variances.  In other words, good police work, not guesswork.

To ensure that the statistics are not analyzed inappropriately and inaccurately, agencies that implement these types of programs must consult with local community groups prior to starting the program to discuss potential outcomes.  Agencies should also designate these stops as special enforcement projects when capturing stop-data.

These stops should be analyzed both separately from the aggregate data and as part of the aggregate data.  By establishing clear goals and objectives before implementing the program, officers will understand the purpose of the program and understand how to define “success”.  This will reduce guesswork, probing and “stats” chasing, which can result in police-based disparities.

Administrators should provide officers with sufficient information to make intelligent decisions and hold managers and supervisors accountable for achieving organizational goals. This should prevent programs of this nature from turning into organizational nightmares.

Many critics insist that focusing an enforcement program within the minority community is profiling, and that it is wrong.  But it is not racial profiling – it is responsive policing. Failure to respond to high fatality rates is wrong.  Enforcement programs designed to increase seatbelt compliance and save lives should be focused in those communities where there is the most need.

In this case, the program should be focused in the African-American community. Law enforcement does not need to apologize for enforcing the law, for targeting criminal behavior, or for focusing on behavior that threatens public safety (such as seatbelt violations).

Community Responsibilities

Civil rights and community-based organizations have a responsibility to obtain an “expert” level of knowledge and understanding of racial profiling, bias-based policing, and data collection and analysis before leveling discrimination allegations.

It does the community a disservice for reputable organizations, whether civil-rights or community-based, to accuse law enforcement of racism and/or discrimination based solely on statistical disparities or on the implementation of unbiased traffic enforcement programs.  Although the police have a responsibility to work with the community, the community shares the same responsibility to work with law enforcement. This partnership fosters mutual respect and a better understanding of community perceptions and the complexities of policing in a democratic society.

Cooperating with the police does not dilute community activism or citizen oversight. To the contrary, it empowers communities to hold law enforcement accountable. Communities must speak out against racism, discrimination, and biases.  They must also speak up in support of law enforcement when we “get it right.”  Otherwise, the voice of the community becomes nothing more than the voice of the critic.  Dissent is necessary to hold government accountable, but it must be balanced with support.

When does the data indicate racial profiling?

Racial profiling and bias-based policing are systemic problems. No single database can determine whether they exist or to what extent.  Stop data provides critical information for assessing organizational behavior.  It provides a necessary piece of the problem-solving pie, identifying which systems are influenced by bias or which systems are producing disparities.  Many of these disparities may be societal-based; others may not.

Police administrators must know why disparities exist.  They must also have the courage to accept it when the disparities are police-based. Post-stop data can help determine managerial effectiveness by answering questions such as: How effective are your enforcement programs?  Are they resulting in disparate treatment? If so, why?

Does racial profiling exist in my agency?  This question cannot be answered in isolation or by data alone.

To answer this question, additional factors must be considered:

·         Stop-data (incorporating local-variables)

·         Community perceptions

·         Citizen complaints

·         Officer misconduct allegations

·         Policies & Practices

·         Special Programs

·         Mission, Vision & Value Statements

·         Training

·         Officer feedback

What value does data collection provide?

Statistical analysis enables one to make intelligent inferences from the collected data, making it possible to identify organizational bias, either in operational systems or in functional programs. Statistical disparities do not automatically constitute discrimination, racial profiling or even bias-based policing.  However, the degree of the disparity, the areas or categories in which disparities appear, and the context in which they exist may signal “bias”. Single data-set disparities in and of themselves do not have much value.  However, when they are combined with topical disparities, the data may indicate bias and identify which systems and/or programs are resulting in disparate treatment.


The chart below provides a theoretical example of when statistical disparities may signal bias.

Search Basis                                  Search Results

High Discretion / High Consent Searches       Low Yields, Low Arrests

Low Discretion                                Low Yields, Low Arrests

There may be legitimate reasons for the disparities outlined in this chart, but absent such reasons, they indicate bias.  The fact that the majority of stops are for mechanical or high-discretion violations, coupled with high or disparate search rates, indicates exploratory stops and searches. The fact that a large percentage of searches are consensual, with low yields – also known as “hit rates” – indicates the exploratory searches may be based primarily on biases and stereotypes.

In this theoretical case, the administration may want to identify what criteria officers are using to determine consent searches.  Although officers may legally conduct consent searches on anyone, this data reveals bias and indicates ineffectiveness. This theoretical agency should consider implementing policies that outline consent search protocol and procedures and enumerate supervisory and managerial responsibilities.
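The pattern described above can be made concrete with a simple comparison of consent-search rates and yields by group. This is a sketch only; the group names and figures are hypothetical, not from any agency's data.

```python
# Compare consent-search rates and yields ("hit rates") by group.
# A high consent-search rate paired with a low yield suggests
# exploratory searches rather than searches driven by evidence.
def search_profile(stops, consent_searches, yields):
    """Return (consent-search rate %, yield %) for a group."""
    consent_rate = 100.0 * consent_searches / stops
    yield_rate = 100.0 * yields / consent_searches if consent_searches else 0.0
    return consent_rate, yield_rate

groups = {
    "Group A": (1000, 300, 30),  # searched often, contraband found rarely
    "Group B": (1000, 50, 30),   # searched rarely, contraband found often
}
for name, (stops, searches, hits) in groups.items():
    cr, yr = search_profile(stops, searches, hits)
    print(f"{name}: consent-search rate {cr:.0f}%, yield {yr:.0f}%")
```

In this illustration, Group A is searched six times as often but with one-sixth the yield, which is exactly the combination the theoretical chart flags as a possible signal of bias.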

The next chart shows a theoretical case in which statistical disparities are explainable and reasonable.

Search Basis        Search Results

High Discretion     Low Yields, Low Arrests, High Citation

Low Discretion      Low Yields, Low Arrests, Low Citation

In this case, there are disparities in actual stops and in specific types of violations.  Post-stop data, such as searches and hit rates, are relatively the same. Citation rates are somewhat higher for blacks than whites. At face value, many would immediately point to the higher stop rates for blacks versus whites, especially if the percentage variances are more than 5% above the census data.
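A face-value screen of this kind can be sketched as follows. The census shares, stop counts, and the 5% threshold are illustrative only; as the surrounding discussion stresses, a flagged disparity is a starting point for inquiry, not proof of profiling, because locally relevant variables may explain it.

```python
# Flag stop-share disparities that exceed a benchmark share by more
# than a chosen number of percentage points (5, per the example above).
def flag_disparities(stop_counts, benchmark_shares, threshold=5.0):
    """Return {group: (stop share %, variance vs. benchmark, flagged?)}."""
    total = sum(stop_counts.values())
    flags = {}
    for group, count in stop_counts.items():
        stop_share = 100.0 * count / total
        variance = stop_share - benchmark_shares[group]
        flags[group] = (round(stop_share, 1), round(variance, 1),
                        variance > threshold)
    return flags

stops = {"black": 420, "white": 580}        # hypothetical stop counts
census = {"black": 30.0, "white": 70.0}     # hypothetical benchmark shares (%)
print(flag_disparities(stops, census))
```

With these illustrative numbers, the black stop share (42%) exceeds the census benchmark by 12 points and is flagged; the next step is to test locally relevant variables before drawing any conclusion about bias.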

Locally relevant variables may explain this disparity.  Special programs (such as seatbelt enforcement) may be conducted in minority neighborhoods based on need, community concerns and even at a community’s request.  The stop disparity thus becomes societal-based, not police-driven.  The post-stop data, however, is statistically the same, which indicates the program is not influenced by bias.  Unless the agency has established an effective partnership with the community, residents may immediately assume racial profiling, causing the police to become defensive and insist, “not in my agency.”

Agencies must work with the community to identify why these disparities exist.  Are they police-based or societal-based?  What is your response to either potential outcome?  If an agency is going to implement a program whose statistics will be skewed by cultural or societal-based disparities, it is imperative to explain the program, its goals and its expected outcomes before implementation.

There are other combinations of factors, variables and percentages that can provide valuable insight into operational effectiveness and disparate outcomes.  Below are a few variables that should be considered when establishing baselines and benchmarks for analyzing data.

·         1990 versus 2000 Census

·         Driving Age Population

·         Day-Time Population

·         Major Thoroughfares

·         Violator Population

·         Area/Precinct Demographics

·         Race-geographic lines

·         Population Density

·         Staff Deployment

·         Special Projects/Assignments

·         Probation & Parole

·         Repeat Offenders

The key to analyzing and applying the data is to work closely with all stakeholders during data-collection development, implementation, and analysis.

Officer Identification

Rank-and-file officers are concerned that an inability to establish credible benchmarks and analyze the data accurately will result in officers being falsely labeled as racists.  There are also concerns that the information will be used to file frivolous lawsuits against officers.

These concerns are understandable. Although there is definite value in officer identification, there is also great potential for misuse and abuse, which would further erode police-community relations, damage organizational morale and compromise the integrity of data collection programs.  Agencies initiating data collection programs should start with anonymous data until the data collection systems are operational and tested, and the community and press have been educated about data collection and analysis.

If the community understands benchmarks, the variables that skew aggregate data, and the variables necessary to establish benchmarks, the information is less likely to be misinterpreted and misused.

This educational process can be accomplished through local racial profiling task forces and advisory committees. Officer identification can then be incorporated into later iterations of data collection.

The decision to identify officers in data collection should be left to local agencies based on local factors. NOBLE recommends, however, that data collection programs ultimately include officer identification, which is then integrated into a comprehensive early warning system that tracks various indicators of officer behavior.

Utilized in this context, the data will be one of many factors used to determine whether officers are engaged in inappropriate behavior or whether their behavior suggests there are problems in need of immediate intervention. The decision to identify officers should be based, in part, on the community’s understanding and knowledge of racial profiling, and of data collection and analysis.

Administrators must ensure officer information is “confidential” and, to the extent possible, afforded the same protections as personnel files.  This will vary in jurisdictions based on government codes, public information laws, and civil service rules.

Officer-Data Analysis

The same principles for organizational analysis apply to individual officer analysis. Numerous variables must be considered when establishing benchmarks. Aggregate census data and even precinct demographics may not be effective, because officers patrol smaller geographic areas whose demographics may vary drastically from the aggregate census data.

Depending on the location of the officer’s beat or assigned area, there may be major thoroughfares or areas with high daytime populations.  This too must be factored in.  Officer search and post-stop data cannot be compared to departmental averages.  Each beat and officer will require tailored benchmarks based on local variables.
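The beat-level principle above can be sketched as follows: an officer's stop shares are compared against that officer's own beat demographics rather than department-wide or citywide averages. All names and figures are hypothetical.

```python
# Officer-level benchmark: compare each group's share of an officer's
# stops against that group's share of the officer's beat population,
# rather than against aggregate census or departmental averages.
def beat_benchmark(officer_stops, beat_demographics):
    """Return {group: (stop share %, beat population share %)}."""
    total = sum(officer_stops.values())
    return {
        group: (100.0 * officer_stops.get(group, 0) / total, beat_share)
        for group, beat_share in beat_demographics.items()
    }

# A beat that is 60% Group A looks very different from a city that is
# 25% Group A; the fair comparison is against the beat, not the city.
print(beat_benchmark({"Group A": 70, "Group B": 30},
                     {"Group A": 60.0, "Group B": 40.0}))
```

In this illustration, a 70% stop share for Group A would be a 45-point disparity against a 25% citywide benchmark but only a 10-point variance against the beat itself, before major thoroughfares and daytime population are even considered.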

Purpose and Benefits of Data Collection

Identifying officer characteristics, such as age, length of service, race and gender, may also provide valuable information for organizational assessments.  There are hundreds of data fields that can be beneficial.  Data collection, however, must be practical.  The determining factor is what administrators define as the goal of data collection.

If data collection is designed to assess organizational behavior and effectiveness, officer identification is valuable, but not necessary. If an agency is attempting to identify organizational systems and operational programs that may be influenced by bias, officer identification is valuable, but again, not necessary.

If an agency is attempting to identify “racist” officers or officers engaging in racial profiling, then officer identification is necessary; although, no one data set will accomplish this goal. If an officer is engaged in intentional discriminatory practices, it is probably not in isolation.  A racist officer is not going to limit his or her actions to car stops. 

The primary purpose of effective early warning systems is to track citizen complaints, rude conduct, excessive force and other indicators of misconduct.  There is no replacement, however, for effective supervisory and managerial leadership and oversight.

Understanding how to analyze data answers the question: “Why collect data?”

Law enforcement officials must develop a better understanding of data collection and analysis in order to truly recognize its value and its necessity.  It is strongly recommended that agencies consult with professional researchers or statisticians for both accuracy and validity.

Ten Key Points

1.       There is no need to fear data collection and analysis.

2.       Data is information, information is knowledge and knowledge is power.

3.       Stop data is not perfect – but it is better than no data.

4.       Effective baseline comparison data and benchmarks can be determined and established.

5.       The key to data analysis is interpretation: what does the data mean to you, the organization and the community?

6.       Involve all stakeholders in all aspects of data collection and analysis.

7.       Data collection by itself will not answer the question: “Does racial profiling exist?”

8.       Data collection is not the solution to racial profiling. It is a critical tool in developing solutions and measuring managerial effectiveness.  “You cannot manage what you do not measure.”

9.       Data collection can identify bias in operational systems and functional programs, and be used to reduce disparate treatment and racial tension.

10.   False allegations of racial profiling and discrimination based on inaccurate analysis are extremely harmful to the community and to the law enforcement profession.

Recommendations for Effective Data Collection & Analysis

1.       Form a local advisory group or task force comprised of all stakeholders, including police, community, civil rights, police unions or associations, professional researchers and/or academics.

2.       Provide training to the advisory group so they may obtain an expert level of knowledge and understanding of racial profiling, bias-based policing and the complexities of data collection and analysis. Do not assume task force members understand the issues.

3.       Utilize the task force to define racial profiling and bias-based policing in an agency policy that is in accordance with applicable local ordinance or state law and CALEA standards.

4.       Determine the goal(s) and desired outcomes of data collection before designing the system.  Engage the community in this process through marketing strategies, such as Town Hall meetings.

5.       Identify the locally relevant variables that may skew aggregate data and list all the relevant variables that are necessary in establishing benchmarks. This process must be completed prior to identifying what data should be collected.

6.       Identify baseline comparison data and establish benchmarks.

7.       Identify what data should be collected.  Professional research has been conducted in this area.  It is not necessary to reinvent the wheel, but it is necessary to identify locally based variables, as they can vary between agencies and jurisdictions.

8.       Identify “best practices” and develop data collection methodologies that fit the organization, the community, and the budget.

9.       Train officers and the community on racial profiling and bias-based policing; the new policy; the agency’s data collection program – its purposes, value and expected outcomes (not statistical projections); and their role in ensuring success.

10.   Collect & analyze the data, and report the findings and recommendations.

These steps are not all-inclusive, but are designed to provide a guide for establishing effective data collection and analysis programs.


There are many considerations in establishing benchmarks.  This report provides a practitioner’s perspective and a few basic, non-scientific principles and examples of how data collection can identify organizational bias, improve managerial effectiveness and improve community relations. The examples provided in this report are not comprehensive, all-inclusive, nor intended to be a complete guide on data collection and analysis.

The Center for Naval Analysis Corporation (CNAC) and the National Organization of Black Law Enforcement Executives (NOBLE) are working on a data collection benchmarking project funded by the United States Department of Justice, Office of Community Oriented Policing Services.  The technical assistance guide will be released in January 2002.  That guide will provide a more comprehensive and academic perspective on data analysis and benchmarking.

Written by,

Captain Ronald L. Davis, Region Vice President


Copyright © December 2001 by Ronald L. Davis

All rights reserved.

[1] Washington Post (June 21, 2001)

[2] 1999 Gallup Poll

[3] Racial Profiling: A Symptom of Bias-Based Policing

[4] Professor Deborah Ramirez, Northeastern University

[5] San Jose Traffic Stop Data Report, December 2000

[6] Oakland Data Collection Report

[7] Oakland Data Collection Report

[8] Oakland Data Collection Report

[9] California Department of Corrections & Alameda Co. Probation Dept.
