
Crime Analysis Products

Open Homework Posted by: papadok01 Posted on: 12/09/2020 Deadline: 2 Days


After reading Chapter 7 in the textbook, write an essay of 1,500-2,000 words explaining the types of crime analysis products created for Operational, Management, and Command personnel.

  1. Contrast the different types of information and activities needed to create the product.
  2. What are the key differences between the products?
  3. How would each one of these stakeholders use the product?

Be sure to cite three to five relevant scholarly sources in support of your content. Use only sources found at government websites, or those provided in Topic Materials.


Project ID: 717171
Category: Business & Management
Subject: Organizational Behavior
Level: M.Phil.
Deadline: 2 Days
Budget: $50-70 (2-5 Pages / Short Assignment) Approx.
Required Skills: Essay Writing
Type: Open For Bidding



Attachment 1


Tables and graphs for monitoring temporal crime trends: Translating theory into practical crime analysis advice

Andrew P Wheeler, John F. Finn Institute for Public Safety, USA

Abstract: This article is a practical review of how to construct tables and graphs to monitor temporal crime trends. Such advice is mostly applicable to crime analysts seeking to improve the readability of their products, but is also useful to general consumers of crime statistics trying to identify crime trends in reported data. First, the use of percent change to identify significant changes in crime trends is critiqued, and an alternative metric based on the Poisson distribution is provided. Second, visualization principles for constructing tables are provided, and a practical example of remaking a poor table using these guidelines is shown. Finally, the utility of using time series charts to easily identify short- and long-term increases, as well as outliers in seasonal data, is illustrated using examples with actual crime data.

Keywords: Tables, time series, crime statistics, crime analysis, crime trends

Submitted 04 Oct 2015; accepted 14 Mar 2016


Crime statistics are invariably presented in tables and graphs. Such statistics are regularly presented in police departments, either internally at regular meetings or externally for crime reporting purposes and public information awareness (Boba Santos, 2012; Manning, 2008; Wilson, 1957). Although several textbooks exist to help guide analysts in making and presenting crime maps, there is relatively little advice about making tables and time series graphs oriented specifically towards crime analysts and police practitioners. This article fills that gap by providing practical advice on presenting tables and graphs to monitor temporal crime trends, and aims to translate statistical advice and the science of how we visualize data into improved crime analysis reports. This is important to improve understanding and efficiency not only for crime analysts, but also for those consuming the graphs, such as police command staff and the general public.

The main motivation for monitoring crime over time is typically to identify outlying trends in relation to historical values, for example, the number of burglaries being high this month. As an example, Figure 1 displays a table publicly published by the New York City Police Department (NYPD). Such tables are standard for CompStat meetings, as well as for disseminating crime statistics in public reports. These tables are not idiosyncratic to CompStat, however, and similar ones are used regularly to report crime statistics.

When a current crime trend is identified as being substantively high compared with historical numbers, presumably the police department will take specific action to attempt to reduce those types of crimes in the future. A response can be as simple as notifying detectives that a serial offender may be at large, or taking more extreme measures such as allocating extra patrols to address the problem. Showing falls in historical numbers can be used as a performance tool to see if a particular action has effectively decreased crime. Or, conversely, when crime statistics are increasing, such comparisons can be used as a way to pressure individuals within the department to take some action (Eterno and Silverman, 2010). Finally, showing when crime has not substantively increased compared with historical numbers is just as useful. This would prevent extra resources being devoted to chasing the noise, or provide a more accurate presentation of crime statistics to the public. It is often the case that a few crimes can be turned into a crime wave by the media (Sacco, 2005).

Corresponding author: Andrew P Wheeler, John F. Finn Institute for Public Safety, 421 New Karner Rd. Suite 12, Albany, NY 12205, USA. Email: [email protected]

International Journal of Police Science & Management, 2016, Vol. 18(3) 159–172. © The Author(s) 2016. Reprints and permission: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/1461355716642781. psm.sagepub.com

Figure 1. Example CompStat report publicly released by the NYPD. Reports can be obtained from: www.nyc.gov/html/nypd/html/crime_prevention/crime_statistics.shtml

Effectively distinguishing what is high or low compared with historical numbers is often demonstrated by showing a percentage change. This article first presents reasoning to explain why percentage change is a very poor statistic to use when monitoring any time series, especially one of crime counts. In place of percent change, an alternative metric, a Poisson z-score, is provided. The article also discusses data visualization principles when constructing tables, as well as the utility of graphics to monitor time series. This will improve the utility of tables and graphs for both crime analysts and command staff using such tables internally, as well as presenting the information in a more effective way to the public.

Use of tables and graphs in policing

Data-driven approaches to reducing crime and improving safety, with examples such as CompStat (Silverman, 1999) and hot-spot policing (Sherman and Weisburd, 1995), are the norm in modern police agencies (Coldren et al., 2013; Randol, 2014). The actuarial approach to allocating resources strategically is not new; as far back as 1957, O.W. Wilson suggested allocating resources to times with more calls to the police (Wilson, 1957). The advancement of data technology has changed the capabilities of statistical analysis, however, from long-term prospective planning to more real-time analysis. Maps and graphs can be used as managerial tools, either to take action in the face of an emerging trend or as a measure of current performance (Bonkiewicz, 2015; Davis et al., 2015).

The majority of the literature on police use of data has focused on the adoption of crime mapping (Roth et al., 2013), and hot-spot policing has regularly been shown to be effective in reducing crime (Braga, 2001). Although examples of pin maps can be found before modern mapping technology (Harries, 1999), the introduction of the specific crime analyst vocation is likely the foremost instrument in adopting data-driven policing strategies (Boba Santos, 2012).

Less academic attention has been paid to the use of tables and time series graphs in decision-making, although they are just as integral and are regularly used in data analysis. One counter example is Guilfoyle (2015), in which a survey was conducted presenting a table of crime counts comparing one month to the prior month, along with a percent change. Guilfoyle (2015) found that the majority of officers used this simplistic information to infer a more general trend that crime was increasing overall, although insufficient data was given to make such an inference.

Whereas Guilfoyle (2015) focused on how statistics in simple tables are interpreted by police actors, this article focuses on simple improvements to tables and graphs to improve the data supplied and the readability. This article covers:
  • a metric with various improvements over percent change in monitoring crime counts;
  • data visualization principles applied to making tables; and
  • constructing time series charts to monitor temporal crime trends.

Although each section has most practical utility to the application of crime analysis, there are greater lessons in interpreting temporal trends in crime counts for all police agents, as well as the general public. In particular, knowing how percent change is a poor metric for evaluating change in crime counts, and the possible swings in random data, provide a warning for the consumers of crime statistics. Graphical advice applied to making tables is also applicable to researchers, because most quantitative research includes tables, and the advice given is not in any way particular to crime analysts.

Percent change encourages chasing the noise

As shown in the example table from the NYPD (Figure 1), percent changes based on prior crime counts are regularly reported in tables of crime statistics. This section discusses why using percent change to monitor temporal crime trends, especially counts of crimes that occur infrequently, is very problematic. The motivation for monitoring temporal crime trends is to identify when current numbers are outliers compared with historical numbers. The nature of percent change makes it very difficult to know how large a percent change must be to qualify as an outlier.

Any particular statistical estimate has variance. The variance of an estimate will tell you how often that estimate will have lower or higher measures. As a hypothetical example, imagine a department writes an average of 100 traffic tickets per month. If the variance of that estimate is 10², having a month with 80–90 tickets written would be quite common. However, if the variance were lower, say 4², months with 80–90 tickets would be very rare. Knowing the variance of an estimate allows one to identify changes that are likely by chance and those that are not likely by chance.
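The role of the variance here can be checked with a quick simulation. The sketch below is not from the article; it assumes monthly ticket counts are roughly normally distributed around 100 with the two stated variances, purely to show how the variance controls how often months land in the 80–90 range.

```python
import random

random.seed(1)

def share_in_range(sd, lo=80, hi=90, n=100_000):
    """Fraction of simulated months whose ticket count lands in [lo, hi],
    for counts drawn around a mean of 100 with standard deviation sd."""
    draws = (random.gauss(100, sd) for _ in range(n))
    return sum(lo <= d <= hi for d in draws) / n

# Variance 10^2 (sd = 10): months with 80-90 tickets are fairly common.
common = share_in_range(sd=10)
# Variance 4^2 (sd = 4): the same range is now far out in the tail.
rare = share_in_range(sd=4)
print(f"sd=10: {common:.3f}, sd=4: {rare:.4f}")
```

With a standard deviation of 10, roughly one month in seven falls in the 80–90 band; with a standard deviation of 4, such months almost never occur.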

The variance of any metric, including percent change, will be some function of the variance of the items that make up that metric. The formula for percent change is:

  Percent Change = (Post − Pre) / Pre

A useful re-expression of this equation is:

  (Post − Pre) / Pre = Post / Pre − 1

So the variance of percent change is based only on the ratio of Post and Pre, which in our example are the counts of crime per some time unit, such as months. The variance of this ratio is not generally defined, but the more often the value of Pre is close to zero, the larger the variance becomes. (A related problem is that the measure is not defined at all when Pre equals zero exactly.) So, the percent change metric has a different variance depending on how often the crimes occur, and when the crimes occur infrequently the variance is likely to be quite high (if it can be calculated at all).

The implications of this point are simple to explain using some examples. If the prior value is 5 crimes, and this then increases by 3 to 8 crimes, the percent change will be (8 − 5)/5 = 3/5 = 60%. However, if the prior value happened to be 4 crimes and this increased to 8, what happens to be a minor difference of only one crime in the baseline turns into a much larger percent change, (8 − 4)/4 = 4/4 = 100%.

Why does not having a defined variance matter? Percent change makes it very easy to be fooled into thinking there is a significant increase in crime compared with historical numbers when really there is no change. This is called a false positive, and any action taken by the police department in response to a false positive will result in wasted resources.

As an example, one might say that a 20% change is cause for concern, and if you knew the variance of the percent change you could tell how often this 20% cut-off would produce a false positive. Typically, one wants to set the rate of false positives very low, and standard scientific conventions are either 5 times in 100 or fewer. With percent changes, one cannot set a value to maintain a consistent false-positive rate; for example, you cannot tell how often a percent change of 20% or larger happens. It may happen 1 time in 100, 10 times in 100 or much more frequently, even in random data. If one always assigns extra patrols in response to a significant increase in crime in a particular area following a monthly meeting, and the false-positive rate was 1 in 100, you would only assign an extra patrol by accident typically once every 8 years. However, if the false-positive rate was 10 in 100, you would assign an extra patrol by accident around once a year.

To illustrate this, I simulated a set of Poisson-distributed random variables with means of 5, 10, 20, 50 and 100, representing counts of crimes by month over 10 years.¹ A Poisson distribution is often used to model counts of events occurring in a particular interval. Crime data will never be exactly Poisson distributed, but it is often a reasonable approximation (Maltz, 1996).

By construction, these simulated variables have no trends or outliers. Percent changes were then calculated using the same month in the prior year as the Pre value, so the simulation includes a total of 108 calculations of the percent change. Figure 2 shows the histograms of these statistics. Note that, by construction, this is what happens when there are literally no changes in the distribution; any percent change that might be flagged as noteworthy would simply be chasing the noise. Subsequently, any response by the police department to high percent changes would be wasting resources.
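A minimal version of this simulation can be sketched as follows. This is my own reconstruction of the stated setup (Poisson counts by month over 10 years, with the same month in the prior year as the Pre value); the article's actual code is not given, and the Poisson sampler here is the simple Knuth multiplication method.

```python
import math
import random

random.seed(7)

def poisson(lam: float) -> int:
    """Draw one Poisson variate (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def yoy_percent_changes(lam: float, months: int = 120) -> list[float]:
    """Percent changes vs the same month one year earlier (Pre > 0 only)."""
    counts = [poisson(lam) for _ in range(months)]
    return [(post - pre) / pre * 100
            for pre, post in zip(counts, counts[12:]) if pre > 0]

# Series with smaller means swing far more wildly, even though the
# underlying distribution never changes -- any flag is chasing the noise.
spread_5 = yoy_percent_changes(5)
spread_100 = yoy_percent_changes(100)
print(max(spread_5), min(spread_5))
print(max(spread_100), min(spread_100))
```

Running this shows the same qualitative pattern as Table 1: the mean-5 series routinely produces triple-digit percent increases from pure noise, while the mean-100 series stays within a few tens of percent.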

One can clearly see that the smaller mean of the time series results in a much wider variance of percent changes. The ranges for the samples here are listed in Table 1. In this simulation, there is a clear bias for positive changes and, even though the higher mean time series has a smaller variance, the variance of the estimator still contains what would likely be considered noteworthy swings of over 20%. That is, even for crimes with an average of over 100 occurrences per month, percent changes of over 20% are not rare when the underlying distribution does not change.

The positive bias comes from the fact that percent change is not symmetric. For example, an increase from 4 to 5 crimes is a 25% increase, whereas a decrease from 5 to 4 crimes is only a 20% decrease. This has the undesirable effect that one is more likely to think that a temporal crime trend is significantly increasing than decreasing when using percent change. Percent change is more likely to fool you into thinking crime is increasing than it is decreasing.

If one were using these simulations to set the false-positive rate, an absolute percent change of over 30% would be needed to have a false-positive rate under 10% for the time series with a mean of 100. The ranges from the simulation in Table 1 provide an estimate for a false-positive rate slightly under 1%. In my experience, swings of much smaller percent changes are typically considered noteworthy. A 10% false-positive rate is not particularly good either: if one were doing a monthly report, a 10% false-positive rate would result in an average of over one false positive per year. In practical situations, a much lower false-positive rate would be needed for a metric to be an effective signal for a police department to take action.

This is a problem because very large percent changes are needed to be considered unusual compared with historical numbers, but the statistic is not invariant to the distribution of the prior time series. It is likely known that time series with smaller counts typically have wider percentage changes, but this makes it difficult to know when there is a significant change in the prior time series; not just statistically significant, but meaningfully significant. Consumers of the statistics need to make mental guesses as to what are reasonably large percent changes, and then vary those guesses based on the baseline of historical numbers. This presents an additional cognitive burden on the consumer, and the percent change metric encourages chasing the noise, especially for increases in crimes, because the most variable percent changes will, on average, produce the largest statistics.

A simple alternative to percent change

Part of the motivation for using percent change in tables is likely that it is both easily calculated and easily understood, as well as historical inertia. Here, an alternative statistic is presented that is easily calculated and has a better-behaved distribution with which to control false positives using typical crime data: a Poisson z-score.


Presuming the crime series is a Poisson-distributed random variable, a simple standardized metric can be calculated as:

  2 × (√Post − √λ)

where λ is the long-term average of the distribution prior to Post. I refer to this metric as a Poisson z-score. This normalizes the distribution of the statistic to have a mean of 0 and a variance of ≈1, with higher means being closer to the normal approximation. This is because for Poisson-distributed random variables the mean is equal to the variance, so taking the square root makes the variance of Poisson-distributed variables with different means the same. The multiplication by 2 transforms the statistic to have a variance of 1, so that usual normal distribution tables can be used to determine the approximate false-positive rate. This statistic is also defined when the prior value is zero. So, unlike percent change, one knows how large a value of this statistic is needed to flag a change in the time series as significant. In addition, the interpretation of the value does not change even if the prior series averages 5 crimes a month or 100 crimes a month.
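The metric is a one-liner to implement. This is my own sketch (the article gives only the formula; the function name is mine):

```python
from math import sqrt

def poisson_z(post: float, lam: float) -> float:
    """Poisson z-score: 2 * (sqrt(Post) - sqrt(lambda)),
    where lam is the long-term average prior to Post."""
    return 2 * (sqrt(post) - sqrt(lam))

# Unlike percent change, the statistic is defined when the prior is zero
print(round(poisson_z(2, 0), 2))      # 2.83

# and sits on the same scale whether the series averages 5 or 100 a month:
print(round(poisson_z(8, 5), 2))      # 1.18
print(round(poisson_z(111, 100), 2))  # 1.07
```

Flagging values with an absolute z of 2 or 3 then gives a known, consistent false-positive rate regardless of the baseline count.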

Figure 3 presents a set of histograms using the standardized metric instead of percent change, using the same simulated data as presented in Figure 2. Although the normal approximation is more accurate for the series with a larger mean, one can see that the spreads of the distributions are more similar between the panels than for percent change, are symmetric about zero, and are not biased. A typical outlying value is then simply ±2, for a false-positive rate of ≈5%.

Considering that analysts are frequently monitoring many series, a more stringent rule for a change in the prior series may be considered. If one uses 3 as the rule to alert a change as significant, this only causes a false-positive rate of slightly less than 3 in 1,000. (Considering only positive changes it would be less than 2 in 1,000.)

Figure 2. Histogram of percent change values for simulated random Poisson data with different means.

Table 1. Ranges of percent change based on simulation.

  Mean   Low %   High %
     5     –86      400
    10     –73      200
    20     –67      123
    50     –31       61
   100     –26       43

The utility of the Poisson z-score over percent change is then as follows.

  • The statistic is invariant to the baseline average in the crime series, so one does not need to make mental adjustments to know what is a significantly large percent increase based on how small or large the prior series is.
  • Increases and decreases are symmetric, so the statistic is not biased to flag increases as statistically significant more often than decreases.
  • A consistent false-positive rate for all crime series being monitored can be set. So, if one is monitoring 10 different series per week, and one only flags an increase as statistically significant if it has a Poisson z-score of over 3, this will produce a false-positive rate of around 2 in 1,000. This will mean there is only around 1 false positive in a year for the 10 different series (10 series times 52 weeks is 520 observations) using the Poisson z-scores. With percent change, one will likely have a much larger number of false positives, wasting many more resources on a regular basis.
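The false-positive arithmetic in the last bullet can be checked against the standard normal tail. A sketch using the complementary error function from Python's standard library:

```python
from math import erfc, sqrt

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal random variable."""
    return 0.5 * erfc(z / sqrt(2))

# One-sided rate for flagging only increases with z > 3:
rate = upper_tail(3)
print(f"{rate:.5f}")   # ~0.00135, i.e. under 2 in 1,000

# Expected false positives per year when monitoring 10 weekly series:
print(10 * 52 * rate)  # ~0.7, around one per year
```

The two-sided rate, 2 × upper_tail(3) ≈ 0.0027, matches the "slightly less than 3 in 1,000" figure quoted above.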

In reality, crime data are not exactly Poisson distributed. But the ±3 rule for the Poisson z-scores is later shown to have very close to the expected number of false positives while monitoring a set of five different crime series by week over 2 years. This offers a substantial improvement over percent change.

Although interpreting the statistic is more difficult than interpreting percent change, it can be presented on a simple scale showing when to flag an increase or decrease as being significantly different from historical values. Instead of a percent change being shown in a table, the Poisson z-score can be inserted. This can be interpreted as an absolute value of 2 or more being some evidence of change, an absolute value of 3 or more being stronger evidence of change, and an absolute value of 4 or more being quite strong evidence of change. Positive values then signify an increase in crime, and negative values signify a decrease in crime.

Making better tables

Just like a poor map, a poor table can hinder the reader from correctly interpreting the data contained therein. In general, one thinks of maps and graphs when discussing the principles of data visualization. But in terms of simply displaying data, tables are a type of data visualization: they use rows and columns to display numerical values, instead of the points, bars or lines on a graph. Consequently, we can take lessons on how to display graphs effectively and apply those to how we make tables.

Figure 3. Histograms of Poisson z-scores for simulated random Poisson data with different means.

General data visualization principles applied to making quality tables (or graphs) can be taken from gestalt theories of visual perception (MacEachren, 2004). Although a review of all of these principles is beyond the scope of this article, the main principle of interest is the concept of proximity: items closer together will be perceived as a group. Using gestalt principles, one can construct tables to make particular intended comparisons, or identify characteristics that actively harm making particular comparisons.

In general, one wants to construct the table so that comparisons are made within perceived groups, because it requires more effort to make comparisons between individual items in separate groups. A simple example is that numbers are typically organized in columns: it is easier to calculate the difference between two numbers that are right aligned in one column than two numbers aligned in a row. Although this section discusses ideas more specific than proximity, such as the use of color, gridline placement and ordering the columns, the general guiding principle in designing effective tables is that the design should aid the intended comparisons, not impede them.

As a practical example, this section takes a table and reconstructs it using these general data visualization principles specially oriented towards tables (Feinberg and Wainer, 2011). Figure 4 shows a replica of a table (upper), which is then reconstructed (lower). The motivating example is left uncited, but it was taken from a recent award-winning example in the statistical reports category of the International Association of Crime Analysts. Although it is a specific example, the reasoning around each of the changes can be applied more generally.

When critiquing a particular data visualization, one needs to be clear about what specific information the visualization (or table here) is intended to convey (Kosara, 2007). The two main motivations for crime statistics tables are to identify increases in crime trends and simple reporting of numbers (Behn, 2008; Dabney, 2010). The advice given here is applicable to both.

The most obvious aspect of the table is the gratuitous use of color. There are several reasons why the color scheme chosen for the table is problematic. To describe the color scheme itself, there are five colors in the table: dark blue for the column and row headers, dark green for large decreases, yellow for small decreases, orange for small increases and red for large increases. There are some inconsistencies in how the colors are applied; for example, year-to-date homicides increased from zero in 2012 to two in 2013, but those cells are colored green and the percent change is colored orange, so the exact data specifications used for the color scheme are unclear. There is no legend in the report specifying what each color represents, so these are best guesses.

For specific critiques of the color itself there are three problems. First, people with red–green color blindness will not be able to tell the difference between the green large-decrease and the red large-increase anchors (a similar problem occurs when printing in black and white). Around 8% of the male population have some type of color deficiency (MacEachren, 2004), with red–green deficiencies being the most common.

Second, the color scheme has no perceptual basis for rank ordering the values (Moreland, 2009); for example, one is unlikely to know that dark green means a larger decrease than yellow, and orange means a slight increase. Yellow is a poor choice, because individuals frequently rank the magnitude of colors based on saturation, at least given certain slices of color ranges (MacEachren, 2004). That is, darker colors are often taken to mean a larger change or be of greater interest; for example, bright red is more important than light red or light blue. Yellow by its nature has a high saturation, there is no such thing as dark yellow, and so it commands disproportionate visual weight compared with its ranking as only slightly decreasing. Yellow also tends to reproduce poorly when projected or in print, so the yellow on the computer screen for the person initially making the report is not likely to be the same yellow as viewed in the report.

Figure 4. (Upper) Original replicated table based on an unnamed agency's regular statistical reports. (Lower) Re-creation of the table based on data visualization principles. The original report did not contain any further descriptions of what the crime categories specifically referred to.

                Month to Date        Year to Date
  Ward        2012  2013    Z      2012  2013    Z
  Hom            0     0    0         0     2    3
  Rob-Bus        0     1    2         1     4    2
  Rob-In         3     3    0        31    46    2
  Rape           1     0   -2         5     4    0
  Ass-Agg       12    14    1        93    94    0
  Larceny       42    36   -1       262   262    0
  Auto Theft     4     1   -2        26    25    0
  Burg-Res      36    30   -1       191   189    0
  Burg-Non       3     1   -1        11     9   -1
  Burg-Bus       7     2   -2        23    16   -2
  Total        108    88   -2       643   651    0
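Assuming the Z columns in the re-created table are the Poisson z-scores described earlier, computed with the 2012 value as the baseline and rounded to the nearest integer (my reading of the table, not stated explicitly), the year-to-date column can be reproduced:

```python
from math import sqrt

def poisson_z_rounded(pre: int, post: int) -> int:
    """Poisson z-score 2*(sqrt(post) - sqrt(pre)), rounded to an
    integer as the article recommends for table display."""
    return round(2 * (sqrt(post) - sqrt(pre)))

# Year-to-date 2012 vs 2013 pairs from the re-created table (Figure 4).
ytd = {"Hom": (0, 2), "Rob-Bus": (1, 4), "Rob-In": (31, 46),
       "Rape": (5, 4), "Burg-Bus": (23, 16)}
for crime, (pre, post) in ytd.items():
    print(crime, poisson_z_rounded(pre, post))
```

Each value matches the table's Z column; for example, the year-to-date homicide change from 0 to 2 gives 2 × √2 ≈ 2.8, which rounds to 3.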

Third, color decreases the readability of the text (Few, 2008), which will again be exacerbated if the report is printed. If a color scheme to visualize a range of values is to be chosen, I would recommend one based on ColorBrewer palettes (Harrower and Brewer, 2003), with the blue-to-red diverging scheme appropriate for data that can take on negative or positive values. However, the saying 'if you focus on everything you focus on nothing' is particularly apt here. Coloring the entire table visually emphasizes everything, and so nothing immediately stands out to the viewer. If the intent was for the red high-increase cells to be the most salient features of the table, use of the other colors detracts from that goal.

In the re-creation of the table, the categories with the biggest Poisson z-score in the year-to-date column are sorted in descending order. The Poisson z-scores are rounded to the nearest integer, because it is likely that the error in using the z-scores does not justify accuracy to tenths. Frequently, tables include an inappropriate number of decimal places that decreases their readability, and Feinberg and Wainer (2011) suggest no more than three significant digits in the table if possible. Using a ±2 rule, several changes would be flagged in the table, but using the more conservative ±3 the only change that is significant is homicides. It is likely that if an average of prior homicides, as opposed to zero, were used it would not be flagged as a significant increase. However, to illustrate how one can highlight such a difference, a light orange shade is shown in the year-to-date increase in homicides. This clearly highlights the year-to-date homicides against the rest of the chart. This light orange looks similar to the light gray rows when printing out in gray scale, so in addition, the values are bold in that section. Even printed in gray scale, the bold section stands out against the rest of the table (Feinberg and Wainer, 2011).

Also, the table maintains the order of items based on violent and non-violent crimes, and includes shared crimes (e.g. Burg-Res adjacent to Burg-Non) next to each other. Because the tables might be intended for looking up crime statistics, it may be advisable not to sort the rows, but rather to keep them in consistent locations. Otherwise, data-based sorting should be preferred over simple alphabetical ordering, because it can show patterns in the data and give precedence to particular categories. Here, sorting happens to show that there were slight decreases for all of the month-to-date property crimes. A clear alternative for crime statistics is to order the rows of the table by violent and non-violent crimes. This likely facilitates look-up to a greater extent than does alphabetical ordering.

Zebra stripes are used to delineate headers, every

other row and the total row. It is easier to make com-

parisons down columns than across rows, so here zebra

stripes (Enders, 2007; Lee et al., 2014) aid the across-row

comparisons, so one does not accidentally

compare the month-to-date changes in auto-thefts with

year-to-date changes in burglaries. Although the evi-

dence for the utility of zebra stripes is slight, they have

been shown to be aesthetically preferred by consumers

(Enders, 2008).

The header row is simply given a larger font and

this, in addition to the slightly darker background

shade, provides a clear hierarchy for the columns. An

additional technique to create a hierarchy would be to

use a different font type, and bolder sans-serif fonts are

typically used for titles or headers (Lupton, 2010).

Using a larger font is sufficient here, however, and due

to the small space and printing, more exotic typefaces

are potentially problematic (Boba Santos, 2012). In

small type, the serifs in fonts also tend to be printed

poorly, so the sans-serif font Calibri (currently the

default in Excel) is a fine choice, although many others

would be sufficient. Not making the column or row

labels bold allows one to use bold type selectively to

highlight the increase in the year-to-date homicides.

More generally, it makes it easier to read the table and

know what columns and rows correspond to what spe-

cific information.

Thin borders are selectively used to group month-to-

date and year-to-date comparisons. The use of grid lines

for every cell and row of the table tends to create moiré or

scintillating patterns (Tufte, 2001). Again, the use of very

light borders groups the types of comparisons the table is

intended to make and prevents inappropriate comparisons

(like the z-score for the month-to-date against the year-to-

date numbers). Numbers are right aligned, but are padded

against the column so that they fall almost directly beneath

the column header. When using grid lines this is important,

because the right-aligned numbers often bump against the

edge of the table.

The use of color in the original table has the opposite

effect to the zebra stripes. It creates regions of similar color,

so it is more difficult to follow a single row (Lee et al.,

2014), and what comparisons are intended is confusing for

the viewer. For this reason, one should not use different

colors within a set of numbers intended to go together and

be compared, as is the case with the green homicide counts,

but the orange percent change in the original table.

166 International Journal of Police Science & Management 18(3)

Time series charts for monitoring crime series

Unfortunately, crime counts are never exactly Poisson dis-

tributed. Often the series have over-dispersion, which

occurs when the variance of the series is greater than the

mean (Berk and MacDonald, 2008; Osgood, 2000). Often

this occurs in crime data when there are many zeroes, and

then spurts of higher activity. Possible mechanisms that

cause this are crime sprees from the same offender(s) (Her-

ing and Bair, 2014), or reciprocating violence (Branting-

ham et al., 2012; Loftin, 1986). A Poisson series assumes

independence between the inter-arrival times of events, and

the above are realistic examples of dependence between

events. Less often, a crime series has under-dispersion,

where the variance is smaller than the mean; this does occur

when a time series shows strong persistence. An example of

a cause of under-dispersion may be a chronic, serial offen-

der operating on a regular time schedule (e.g. a chronic

burglar breaking into a few houses once every month).
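A quick, informal check for over- or under-dispersion is the variance-to-mean ratio of the series. The sketch below is an illustration, not a method from the article:

```python
def dispersion_index(counts):
    # Variance-to-mean ratio of a count series: about 1 for a
    # Poisson series, > 1 under over-dispersion, < 1 under
    # under-dispersion.
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / n
    return variance / mean
```

A series of mostly zeroes with a spurt of activity, such as [0, 0, 0, 8], has an index well above 1, whereas a perfectly regular series has an index near 0.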

Because of these deviations from a Poisson distribu-

tion, the normal approximation of the Poisson z-scores

suggested earlier will not be perfectly accurate. In the case

of over-dispersion, the variance of the estimator will be

wider than usual, so there will be a larger number of false

positives. In the case of under-dispersion, the variance

will be smaller.

In light of these complexities, a simple, nonparametric

alternative is suggested: simply graphing the trends.

Figure 5 is a time series chart of the same random vari-

ables discussed above. It is clear from the chart that there

are no trends in the data, and typical month-to-month var-

iations are easily observed given the historical trends. Our

eyes do a better job than the statistics in identifying

abnormalities in the data (Buja et al., 2009; Maltz, 2010).

To illustrate the utility with actual crime data, Figure 6

is a small multiple chart (Cleveland, 1985; Tufte, 2001)

that was initially prepared for a weekly command staff and

intelligence sharing meeting. The actual weekly crime data

are shown as a light red line. The black line represents

the moving average of the prior 8 weeks (approximately

the prior 2 months), and the gray band is the ±3 Poisson z-scores based on that moving average. The most recent

week is displayed as a red dot on top of all the other chart

elements, so the current week can be easily seen in refer-

ence to the gray bands.
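One way to construct such bands, consistent with the 2 × (√Post − √Pre) z-score and with truncating negative lower bounds at zero, is sketched below; the exact construction used for Figure 6 may differ:

```python
import math

def poisson_band(moving_avg, z=3.0):
    # Range of counts whose z-score 2 * (sqrt(x) - sqrt(ma)) lies
    # within +-z; the lower bound is truncated at zero.
    lower = max(0.0, math.sqrt(moving_avg) - z / 2) ** 2
    upper = (math.sqrt(moving_avg) + z / 2) ** 2
    return lower, upper

def band_coverage(counts, window=8, z=3.0):
    # Proportion of weeks whose observed count falls inside the band
    # built from the moving average of the prior `window` weeks.
    inside = total = 0
    for i in range(window, len(counts)):
        ma = sum(counts[i - window:i]) / window
        lower, upper = poisson_band(ma, z)
        total += 1
        inside += lower <= counts[i] <= upper
    return inside / total
```

The coverage function makes the coverage property concrete: a well-calibrated band keeps this proportion high, whereas a metric with poor coverage would frequently leave the observed counts outside the band.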

The graph shows several trends in the high-density dis-

play. First, one can see that the red line is very noisy, but is

mostly covered by the Poisson z-scores, with the exception

of the lower bound for thefts of motor vehicles.2 This

Figure 5. Time series graph of simulated data.

shows with actual data that the Poisson z-scores are a rea-

sonable approximation to flag significant increases. Coverage

here is a statistical property, in that the range of the Poisson

z-scores should contain the observed data a given propor-

tion of the time. The graph easily illustrates this property:

the observed data are covered if the red line is within the

gray bands. A metric with poor coverage would frequently

be outside the confidence bands, and so would flag patterns

too frequently as significant changes compared with the

historical values.

The choice of a prior moving average of 8 weeks is ad

hoc, but has produced reasonable coverage of the historical

data. Effectively visualizing and conveying uncertainty in

the estimates are difficult (Spiegelhalter et al., 2011), but

with the moving average, one can see the typical values for

each crime, see any longer term trends and monitor whether

a particular week is outlying. The y-axis scale is trans-

formed to a square root scale, but labelled in the original

counts. This brings the different series closer to one another

in the small multiple charts, and also shows how the square

root stabilizes the variances of the different series.
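That the square root roughly stabilizes Poisson variance (the variance of √X stays near 1/4 regardless of the mean) can be checked by simulation. The sketch below uses Knuth's Poisson sampler and is an illustration, not the article's code:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniform draws until the running
    # product falls below exp(-lam); the number of draws made
    # before that point is a Poisson(lam) sample.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sqrt_variance(lam, n=10000, seed=7):
    # Empirical variance of sqrt(X) for X ~ Poisson(lam).
    rng = random.Random(seed)
    xs = [math.sqrt(poisson_sample(lam, rng)) for _ in range(n)]
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n
```

For series with means as different as 10 and 100, the variance of the square-rooted series stays near 0.25, which is why the transformed panels sit on comparable visual scales.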

Although the embellishments of the moving average and

the error bands introduce complexities into the chart, even a

chart of just the original time series shows a clearer picture

of the current data versus historical values than a single

number. However, the original data are very erratic, a com-

mon occurrence with low count crime data. Thus, the error

bands and the moving average actually simplify the data,

and provide a guide for the eye to identify both long-term

trends and whether the current series is outlying (Tukey, 1977).


Over a period of nearly 3 years, burglaries, motor vehi-

cle theft and violent interpersonal (the sum of aggravated

assaults and robberies) merely meander about a constant

mean. This suggests that any weekly variation in these

three series is simply noise, and there is effectively no

evidence to suggest that any of the series has shown sig-

nificant increases or decreases during the period. Any extra

Figure 6. Weekly crime statistics. The light red lines are the observed counts of crime. The black line is the moving average of the prior 8 weeks. The gray bands are plus and minus three Poisson z-scores based on the prior moving average. The red dot is the last observed week. Violent inter. includes aggravated assaults and robberies. Larcenies include only Part 1 larcenies. M.V., motor vehicle.

time devoted to analysis or problem-solving for these par-

ticular series in response to short-term fluctuations (up or

down) is equivalent to chasing dots on a time series graph

instead of dots on a map!

Larcenies and thefts from motor vehicles show some

interesting crime waves that are partly explained by non-

traditional seasonal trends and the behavior of some serial

offenders. The town has a large enough college presence to

show marked increases when students come to and leave

town. A frequent problem crime is thefts of small electronics

from motor vehicles (Brimicombe, 2012), with a

large proportion being from unlocked vehicles. Increases

in thefts from motor vehicles throughout 2012 brought

increased attention from community groups and the media,

and the thefts precipitously decreased to 10-year lows

beginning in 2013 after one prolific offender was appre-

hended, who later admitted to around 40 thefts in the prior

2 months. This precipitous fall in crime clearly showed that

the arrest and public awareness campaign were effective in

reducing thefts from motor vehicles.

The original simulated time series are flat, but more

realistic crime data will have seasonal trends, for example,

more crimes in the summer than in the winter. A conveni-

ent display for such data is a seasonal chart (Hyndman and

Athanasopoulos, 2014), where the months are on the x-axis

and each year gets a new line. Figure 7 displays a seasonal

chart showing burglaries per month for the years from 2004

through November 2014. The red line shows an outlier year

in 2011, and the blue line shows 2014 until November. In

retrospect, the 2011 outlying increases in burglaries were

due to thefts of copper from vacant buildings (Sidebottom

et al., 2014). They have since declined to an average of

around two per month. One can see that if such charts were

in use as of 2011, the numbers of burglaries in April

through August 2011 were substantially higher than in any

of the prior 7 years.
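The only data preparation a seasonal chart needs is pivoting the monthly series into one row per year. A minimal sketch, assuming counts keyed by (year, month); the layout is mine, not the article's:

```python
from collections import defaultdict

def seasonal_lines(monthly_counts):
    # Pivot {(year, month): count} into {year: [12 values]}, one
    # line per year for a seasonal chart; None marks months with
    # no recorded data.
    lines = defaultdict(lambda: [None] * 12)
    for (year, month), count in monthly_counts.items():
        lines[year][month - 1] = count
    return dict(lines)
```

Each year's list is then drawn as a line over January through December, with the years of interest bright and thick and earlier years thin and gray.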

Two years of interest are highlighted using brighter and

thicker lines, whereas past years are presented as lighter

gray lines so that they fade into the background (Kosslyn,

1994; Lofgren, 2012). A chart with many lines can look

complicated, but drawing the lines as thinly as possible and

making them semi-transparent helps disentangle the spaghetti

and focus on the overall patterns. Instead of attempting to

draw confidence bands to represent the error in the prior

series, prior years are used as a guide stick to illustrate the

typical year-to-year variation by month.

This chart easily displays whether new data are outlying

compared with the prior years, given the seasonality

in the series, as is the case with June 2014. Directly label-

ling the highlighted years within the charts allows the chart

to be printed in gray scale and still easily tell which line is which.


Both types of chart use the human eye to easily take into

account historical variation in the series. The time series

chart can identify long-term upward or downward trends,

especially with the aid of the moving average, whereas the

seasonal chart can identify if an observed count is an outlier

compared with prior counts around the same season in prior

years. Both give context to the current crime counts,

whereas a simple percent change (or any single metric) in

a table does not illustrate the nature of current rises or falls

in crime statistics, for example, whether the current count

is an outlier compared with historical numbers, or if there is

a long-term increase or decrease in the crime counts. Per-

cent change, in particular, is sensitive to whether the baseline

value was high or low. If the prior year happened to be low,

one may spend an entire year having to explain large per-

cent increases. The example graphs as presented easily

bring such long-term trends to light.
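The fragility of percent change to a low baseline is easy to demonstrate next to the z-score; the counts below are illustrative, not from the article:

```python
import math

def percent_change(pre, post):
    # Undefined when the baseline is zero (see note 1).
    return 100.0 * (post - pre) / pre

def poisson_z(pre, post):
    # Variance-stabilized Poisson z-score: 2 * (sqrt(post) - sqrt(pre)).
    return 2 * (math.sqrt(post) - math.sqrt(pre))
```

A jump from 2 to 4 incidents is a 100% increase but a z-score of only about 1.2, while a rise from 100 to 130 is a modest-sounding 30% increase with a z-score near 2.8; the percentage overstates the first change and understates the second.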

Conclusion
Using analytics to monitor spatial and temporal trends is a

regular undertaking in modern police departments.

Although several textbooks are currently devoted to crime

mapping, much less attention has been paid to how to

appropriately monitor crime over time and present those

figures for general use. Rachel Boba Santos’s book, Crime

Analysis with Crime Mapping (2012) is one exception, but

even here much more effort is spent on crime mapping than

analyzing temporal trends. This is despite the fact that

monitoring crime over time is just as regular a task for

crime analysts and police planners.

This article first discusses the problematic aspects of

using percent change to monitor changes in crime statistics,

Figure 7. Seasonal charts of burglaries. The red line highlights 2011, which was a high year due to thefts of copper from vacant properties. The blue line is for 2014 as of November.

and provides an alternative metric to flag statistically sig-

nificant changes assuming the crime series is distributed as

a Poisson random variable. Using simulations demonstrates

how percent changes are problematic. Although crime data

are not likely to be exactly Poisson distributed, using an

example with actual crime data shows how the ±3 Poisson z-scores have reasonable coverage rates. Although not

perfect, such an approach offers clear improvements over

percent change, while still being easy to calculate.

This article also discusses data visualization principles

designed to improve presentation in statistical tables, and

recreates a typical crime statistics table to make it more

readable. The article then presents the use of time series

charts to monitor complicated crime trends that cannot be

easily reduced to one simple metric. Two types of time

series graphs are illustrated; one with additional running

means and standard errors to visualize long-term upward

or downward trends, and a second seasonal chart that can

be used to spot outliers quickly even in highly seasonal series.


Although the article presents the utility of using time

series charts, it is not expected that they will replace the

standard reporting of crime statistics in tables. Tables are

important for reporting exact numbers, and it is the case

that some individuals may simply prefer and better under-

stand tables (Friendly and Kwan, 2011). This is why per-

centage change is shown to be a poor metric, and why the

alternative Poisson z-score is given in its stead. It is also

why data visualization principles in constructing tables are

given. Although such practices are not likely to be easily

changed, articulating the problems with current practices

and offering potential substitutes is the end goal of translat-

ing scientific research into practice.

Other procedures exist to monitor crime statistics, spe-

cifically control charting (Gorr and Lee, 2014; Rogerson

and Sun, 2001), however, visualizing the time series is a

simple and effective alternative. This is not a critique of

control charting, but the complexities of identifying applicable

limits with complicated seasonal data are non-trivial.

Using the formula 2 × (√Post − √Pre) is easily accomplished in any spreadsheet program (Tollenaar and van der

Heijden, 2013). Plotting the data is easily accomplished in a

spreadsheet (Few, 2011), is easily understood, and provides a

powerful graphical tool to identify trends or outliers in the

current values of the crime series. Even if one does not

prefer the Poisson z-scores because crime data often do not

conform exactly to a Poisson distribution, graphing the

time-series data is a common-sense recommendation and

should be commonplace in statistical reports.

The practical advice in this article is most applicable to

crime analysts, but effectively presenting data benefits all

consumers of the statistics. It prevents com-

mand staff or crime analysts from chasing noisy crime

trends. It prevents the general public from being misled

by false notions of increasing crime statistics—especially

when using percent change. Presenting statistics in an

intuitive and simple-to-digest manner can also aid others in the

criminal justice organization who do not regularly use sta-

tistics to influence their behavior (Payne et al., 2013). One

should not simply generate such numbers by rote (Manning,

2008), but mold such reporting to be an effective tool to aid

in making decisions in the organization.

Acknowledgements
I thank the Chief of Police for allowing me to use data for their

particular jurisdiction. I also thank Janet Stamatel for reviewing

an initial draft of the article. All views expressed in the article are

my own, and are not reflective of the Finn Institute or the police

department that supplied the data for the examples.

Conflict of interest

The author(s) declared no potential conflicts of interest with

respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research,

authorship, and/or publication of this article.

Notes
1. Percent changes are simply missing when Pre is zero. In this

simulation, this only occurs for one value of the mean-5 series.

2. If the lower bound for the Poisson z-score was negative, it was

truncated to zero in the chart. The lower bound for thefts of

motor vehicles ends up being fractional in most circumstances,

so rounding down to zero would likely improve the coverage.

References
Behn RD (2008) The seven big errors of PerformanceStat. Rappa-

port Institute for Greater Boston Policy Briefs. Harvard Univer-

sity John F. Kennedy School of Government. Available at:


manceStatErrors.pdf (accessed 18 January 2015).

Berk RA and MacDonald J (2008) Overdispersion and Poisson

regression. Journal of Quantitative Criminology 24(3):


Boba Santos R (2012) Crime Analysis with Crime Mapping (3rd

edn). Thousand Oaks, CA: Sage.

Boba Santos R (2014) The effectiveness of crime analysis for

crime reduction: cure or diagnosis? Journal of Contemporary

Criminal Justice 30(2): 147–168.

Bonkiewicz L (2015) Bobbies and baseball players: evaluating

patrol officer productivity using sabermetrics. Police Quar-

terly 18(1): 55–78.

Braga AA (2001) The effects of hot spots policing on crime. The

Annals of the American Academy of Political and Social Sci-

ence 578(1): 104–125.

Brantingham PJ, Tita GE, Short MB and Reid SE (2012) The

ecology of gang territorial boundaries. Criminology 50(3):


Brimicombe A (2012) Did GIS start a crime wave? SatNav theft

and its implications for geo-information engineering. The Pro-

fessional Geographer 64(3): 430–445.

Buja A, Cook D, Hofmann H, Lawrence M, Lee E, Swayne DF

and Wickham H (2009) Statistical inference for exploratory

data analysis and model diagnostics. Philosophical Transac-

tions of the Royal Society A: Mathematical, Physical and

Engineering Sciences 367(1906): 4361–4383.

Cleveland WS (1985) The Elements of Graphing Data. Monterey,

CA: Wadsworth.

Coldren J, Huntoon A and Medaris M (2013) Introducing smart

policing: foundations, principles, and practice. Police Quar-

terly 16(3): 275–286.

Dabney D (2010) Observations regarding key operational realities in

a CompStat model of policing. Justice Quarterly 27(1): 28–51.

Davis R, Ortiz C, Euler S and Kuykendall L (2015) Revisiting

‘Measuring what matters’: developing a suite of standardized

performance measures for policing. Police Quarterly Online

First. doi: 10.1177/1098611115598990

Enders J (2007) Zebra striping: does it really help? In: Proceed-

ings of the 19th Australasian conference on computer–human

interaction, Adelaide, Australia, 28–30 November 2007,

pp. 319–322. New York, NY: ACM.

Enders J (2008) Zebra striping: more data for the case. A LIST

APART. Available at: http://alistapart.com/article/zebrastripingmoredataforthecase (accessed 8 January 2015).

Eterno JA and Silverman EB (2010) The NYPD’s Compstat:

compare statistics or compose statistics? International Journal

of Police Science & Management 12(3): 426–449.

Feinberg RA and Wainer H (2011) Extracting sunbeams from

cucumbers. Journal of Computational and Graphical Statistics

20(1): 793–810.

Few S (2008) Practical rules for using color in charts. Visual

Business Intelligence Newsletter, Perceptual Edge. Available

at: http://www.perceptualedge.com/articles/visual_business_

intelligence/rules_for_using_color.pdf (accessed 8 January 2015).


Few S (2011) Are infovis and statistical graphics really all that

different? Visual Business Intelligence Newsletter, Perceptual

Edge. Available at: http://www.perceptualedge.com/articles/


(accessed 8 January 2015).

Friendly M and Kwan E (2011) Comment. Journal of Computa-

tional and Graphical Statistics 20(1), 18–27.

Gorr WL and Lee YJ (2014) Early warning system for temporary

crime hot spots. Journal of Quantitative Criminology 31(1):


Guilfoyle S (2015) Binary comparisons and police performance

measurement: good or bad? Policing: A Journal of Policy and

Practice 9(2): 195–209.

Harries KD (1999) Mapping Crime: Principle and Practice.

Washington, DC: U.S. Dept. of Justice, Office of Justice Pro-

grams, National Institute of Justice.

Harrower M and Brewer C (2003) ColorBrewer.org: an online

tool for selecting colour schemes for maps. The Cartographic

Journal 40(1): 27–37.

Hering S and Bair S (2014) Characterizing spatial and chronolo-

gical target selection of serial offenders. Journal of the Royal

Statistical Society: Series C 63(1): 123–140.

Hyndman RJ and Athanasopoulos G (2014) Forecasting: Princi-

ples and Practice. OTexts.

Kosara R (2007) Visualization criticism – the missing link

between information visualization and art. In: 11th Interna-

tional conference in information visualization, Zurich, Swit-

zerland, IEEE Xplore digital library. 4–6 July 2007, pp.


Kosslyn SM (1994) Elements of Graph Design, New York, NY:

WH Freeman.

Lee M, Kent T, Carswell CM, Seidelman W and Sublette M

(2014) Zebra-striping: visual flow in grid-based graphic

design. Proceedings of the Human Factors and Ergonomics

Society Annual Meeting 58(1): 1318–1322.

Lofgren ET (2012) Visualizing results from infection transmis-

sion models: a case against ‘confidence intervals’. Epidemiol-

ogy 23(5): 738–741.

Loftin C (1986) Assaultive violence as a contagious social pro-

cess. Bulletin of the New York Academy of Medicine 62(5):


Lupton E (2010) Thinking with Type. New York, NY: Princeton

Architectural Press.

MacEachren AM (2004) How Maps Work: Representation,

Visualization, and Design. New York, NY: The Guilford Press.


Maltz MD (1996) From Poisson to the present: applying opera-

tions research to problems of crime and justice. Journal of

Quantitative Criminology 12(1): 3–61.

Maltz MD (2010) Look before you analyze: visualizing data in

criminal justice. In: Piquero AR and Weisburd D (eds) Handbook

of Quantitative Criminology. New York, NY: Springer, 25–52.

Manning PK (2008) The Technology of Police: Crime Mapping,

Information Technology, and the Rationality of Crime Con-

trol. New York, NY: NYU Press.

Moreland K (2009) Diverging color maps for scientific visualiza-

tion. Advances in Visual Computing, Lecture Notes in Com-

puter Science 5876: 92–103.

Osgood DW (2000) Poisson-based regression analysis of aggregate

crime rates. Journal of Quantitative Criminology 16(1): 21–43.

Payne TC, Gallagher K, Eck J and Frank J (2013) Problem framing

in problem solving: a case study. Policing: An International

Journal of Police Strategies and Management 36(4): 670–682.

Randol BM (2014) Modelling the influence of organisational

structure on crime analysis technology innovations in munic-

ipal police departments. International Journal of Police Sci-

ence & Management 16(1): 52–64.

Rogerson PA and Sun Y (2001) Spatial monitoring of geographic

patterns: an application to crime analysis. Computers, Envi-

ronment and Urban Systems 25(6): 539–556.

Roth RE, Ross KS, Finch BG, Luo W and MacEachren AM

(2013) Spatiotemporal crime analysis in U.S. law enforcement

agencies: current practices and unmet needs. Government

Information Quarterly 30(3): 226–240.

Sacco VF (2005) When Crime Waves. Thousand Oaks, CA: Sage.

Sherman LW and Weisburd D (1995) General deterrent effects of

police patrol in crime hot spots: a randomized, controlled trial.

Justice Quarterly 12(4): 625–648.

Sidebottom A, Ashby M and Johnson SD (2014) Copper cable

theft: revisiting the price–theft hypothesis. Journal of

Research in Crime and Delinquency 51(5): 684–700.

Silverman EB (1999) NYPD Battles Crime: Innovative Strategies

in Policing. Boston, MA: Northeastern University Press.

Spiegelhalter D, Pearson M and Short I (2011) Visualizing uncer-

tainty about the future. Science 333(6048): 1393–1400.

Tollenaar N and van der Heijden PGM (2013) Which method

predicts recidivism best?: a comparison of statistical,

machine learning and data mining predictive models, Jour-

nal of the Royal Statistical Society: Series A 176(2):


Tufte ER (2001) The Visual Display of Quantitative Information.

Cheshire, CT: Graphics Press.

Tukey JW (1977) Exploratory Data Analysis. Reading, MA: Addison-Wesley.


Wilson OW (1957) Police Planning. Springfield, IL: Thomas.

Author biography

Andrew P Wheeler is a recent graduate from the School of

Criminal Justice at SUNY Albany. He currently works with the non-

profit Finn Institute as a senior research analyst collaborating with

local police departments. His specific research interests involve

crime mapping and analysis.


Attachment 2


6 Ways to Look More Confident During a Presentation by Kasia Wezowski APRIL 06, 2017

Several years ago, colleagues and I were invited to predict the results of a start-up pitch contest in Vienna, where 2,500 tech entrepreneurs were competing to win thousands of euros in funds. We observed the presentations, but rather than paying attention to the ideas the entrepreneurs were pitching, we were watching the body language and microexpressions of the judges as they listened.


We gave our prediction of who would win before the winners were announced and, as we and the audience soon learned, we were spot on. We had spoiled the surprise.

Two years later we were invited back to the same event, but this time, instead of watching the judges, we observed the contestants. Our task was not to guess the winners, but to determine how presenters’ non-verbal communication contributed to their success or failure.

We evaluated each would-be entrepreneur on a scale from 0-15. People scored points for each sign of positive, confident body language, such as smiling, maintaining eye contact, and persuasive gesturing. They lost points for each negative signal, such as fidgeting, stiff hand movements, and averted eyes. We found that contestants whose pitches were rated in the top eight by competition judges scored an average of 8.3 on our 15-point scale, while those who did not place in that top tier had an average score of 5.5. Positive body language was strongly correlated with more successful outcomes.
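The 0-15 scoring described above can be sketched in code. This is a minimal illustration only; the article does not publish its exact rubric, so the signal lists and the one-point-per-occurrence weighting here are assumptions:

```python
# A minimal sketch of the 0-15 body-language scoring described above.
# The exact rubric isn't published, so the signal lists and the
# one-point-per-occurrence weighting are assumptions for illustration.
POSITIVE = {"smiling", "eye_contact", "persuasive_gesturing"}
NEGATIVE = {"fidgeting", "stiff_hands", "averted_eyes"}

def body_language_score(observed_signals):
    """+1 per positive signal occurrence, -1 per negative,
    clamped to the article's 0-15 scale."""
    raw = sum(1 if s in POSITIVE else -1 if s in NEGATIVE else 0
              for s in observed_signals)
    return max(0, min(15, raw))

# e.g. a presenter seen smiling twice, holding eye contact, and fidgeting once:
print(body_language_score(["smiling", "smiling", "eye_contact", "fidgeting"]))  # 2
```

Under this scheme, a presenter whose positive signals consistently outnumber their negative ones lands in the upper half of the scale, matching the 8.3-versus-5.5 gap the judges' ratings tracked.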

We’ve found similar correlations in the political realm. During the 2012 U.S. presidential election, we conducted an online study in which 1,000 participants, both Democrats and Republicans, watched two-minute video clips featuring Barack Obama and Mitt Romney at campaign events delivering both neutral and emotional content. Webcams recorded the viewers’ facial expressions, and our team analyzed them for six key emotions identified in psychology research: happy, surprised, afraid, disgusted, angry, and sad. We coded for the tenor of each emotion (positive or negative) and how strongly it seemed to be expressed. This analysis showed that Obama sparked stronger emotional responses and fewer negative ones. Even a significant number of Republicans (16%) reacted negatively to Romney. And when we analyzed the candidates’ body language, we found that the President’s resembled that of our pitch contest winners: he displayed primarily open, positive, confident positions congruent with his speech. Romney, by contrast, often gave out negative signals, diminishing his message with contradictory and distracting facial expressions and movements.

Of course, the election didn’t hinge on body language. Nor did the results of the start-up competition. But the right kinds of non-verbal communication did correlate with success.

How can you send out the same signals—and hopefully generate the same success? At the Center for Body Language, we’ve studied successful leaders across a range of fields and identified several positions which are indicators of effective, persuasive body language.

The box


Early in Bill Clinton’s political career he would punctuate his speeches with big, wide gestures that made him appear untrustworthy. To help him keep his body language under control, his advisors taught him to imagine a box in front of his chest and belly and contain his hand movements within it. Since then, “the Clinton box” has become a popular term in the field.

Holding the ball


Gesturing as if you were holding a basketball between your hands is an indicator of confidence and control, as if you almost literally have the facts at your fingertips. Steve Jobs frequently used this position in his speeches.

Pyramid hands

When people are nervous, their hands often flit about and fidget. When they’re confident, they are still. One way to accomplish that is to clasp both hands together in a relaxed pyramid. Many business executives employ this gesture, though beware of overuse or pairing it with domineering or arrogant facial expressions. The idea is to show you’re relaxed, not smug.

Wide stance


How people stand is a strong indicator of their mindset. When you stand in this strong and steady position, with your feet about a shoulder width apart, it signals that you feel in control.

Palms up

This gesture indicates openness and honesty. Oprah makes strong use of this during her speeches. She is a powerful, influential figure, but also appears willing to connect sincerely with the people she is speaking to, be it one person or a crowd of thousands.


Palms down

The opposite movement can be viewed positively too—as a sign of strength, authority and assertiveness. Barack Obama has often used it to calm a crowd right after moments of rousing oration.

The next time you give a presentation, try to have it recorded, then review the video with the sound off, watching only your body language. How did you stand and gesture? Did you use any of these positions? If not, think about how you might do so the next time you’re in front of an audience, or even just speaking to your boss or a big client. Practice in front of a mirror, then with friends, until they feel natural.

Non-verbal communication won’t necessarily make or break you as a leader, but it might help you achieve more successful outcomes.

Kasia Wezowski is the founder of the Center for Body Language, the author of four books on the subject, and the producer and director of Leap, a documentary about the coaching profession.


Copyright 2017 Harvard Business Publishing. All Rights Reserved. Additional restrictions may apply including the use of this content as assigned course material. Please consult your institution's librarian about any restrictions that might apply under the license with your institution. For more information and teaching resources from Harvard Business Publishing including Harvard Business School Cases, eLearning products, and business simulations please visit hbsp.harvard.edu.

Attachment 3


How to Give a Data-Heavy Presentation by Alexandra Samuel OCTOBER 16, 2015


Data storytelling has become a powerful part of the communications toolkit, allowing both journalists and marketers to communicate key messages by using data and data visualization to drive articles, blog posts, and reports. But the power of data storytelling isn’t limited to written communication: you can also use data to deliver presentations that are both more credible and more visually compelling.

Knowing how to develop and deliver a data-driven presentation is now a crucial skill for many professionals, since we often have to tell our colleagues a story about the success of a new initiative, the promise of a new business opportunity, or the imperative of a change in strategy: stories that are much more compelling when they’re backed by numbers.

In the past four years, data has become a bigger and bigger part of my own presentations, since I frequently speak about data-driven projects like the new rules for the collaborative economy, and what social media analytics can’t tell you about your customers. I’ve enjoyed the luxury of working closely with data analysts, infographic designers, and my own in-house speechwriter, which has helped me pick up some tricks on what it takes to create a successful data-driven presentation.

As with any communication, start by thinking about your audience. Who are you presenting to, and how much do they know about the topic? If you’re presenting data on three different sales strategies to the sales team that’s been testing those approaches, you can plunge right in and show them what worked. If you’re reporting on that same experiment to another part of the organization, you need to provide a lot of context before you drop the bar charts in their laps; otherwise what looks like a clear story to you may simply confuse them. A good rule of thumb is to look at the legend on your charts: if you can’t count on the audience knowing what each item in the legend actually refers to, you need to spend some time on setup before you get to the numbers.

It’s easy to let the data overtake your presentation, so be sure you know the overall story you’re trying to tell, and use charts sparingly to support your story. You’re not trying to subdue your enemy through the sheer volume of data you can bring to bear on your argument; you’re using data strategically, when it provides clear and concrete evidence for the story you’re telling. I’ve found that audiences get overwhelmed by back-to-back data slides, so I try to intersperse charts with slides that convey my key point using images or a very few words of text. Show a photo of a shopping cart, and tell people that you now know which cash register displays are most likely to yield impulse purchases; then show the chart displaying the sales figures for different items. Follow that chart with an image or a few short bullets that emphasizes the actionable insights and implications of your data.

It’s rare that anyone will retain all the actual numbers in your presentation, so think about the words that capture the idea, insight, or conclusion you want them to hold onto. Instead of simply throwing up a bar chart that shows levels of employee engagement versus different working arrangements, build to that key chart with a story about the impact of working arrangements on employee satisfaction — illustrated by actual human examples, if possible.

And if there is a single number that really captures your key point — like “employees who work from home 1 day per week are 30% happier than the rest of our workforce” — then make not just that chart, but that specific data point very prominent in your deck. Highlight it in the relevant chart, and consider giving that single data point its own slide or bullet in your conclusion.
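One way to give a single data point that kind of prominence is to mute every other bar in the chart. A minimal matplotlib sketch, assuming matplotlib is installed; the dataset, labels, and engagement numbers are invented for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

# Hypothetical engagement scores by work-from-home days per week
wfh_days = ["0", "1", "2", "3"]
engagement = [62, 81, 74, 70]
key = engagement.index(max(engagement))  # the data point to emphasize

colors = ["lightgray"] * len(engagement)
colors[key] = "tab:blue"  # only the key bar gets a strong color

fig, ax = plt.subplots()
bars = ax.bar(wfh_days, engagement, color=colors)
ax.bar_label(bars, fontsize=12)  # large, readable data labels
ax.set_xlabel("Work-from-home days per week")
ax.set_ylabel("Engagement score")
ax.annotate("Happiest group", xy=(key, engagement[key]),
            xytext=(key, engagement[key] + 6), ha="center")
fig.savefig("key_point.png")
```

The graying-out does the emphasis work: the audience’s eye goes straight to the one colored bar, which is exactly the data point you want them to remember.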

As you present, remember that it takes people some time to digest a chart or data table. Take the time to spell out the story you see in the data so that it’s clear to someone who hasn’t been poring over that dataset for the past six weeks. A simple statement like “in every region except the Southwest, email outperforms phone calls as a way of generating leads; in the Northeast, 5% of emails get a response, versus only 3% of phone calls” will help people understand what they are looking at and how they’re supposed to read your chart. Speak more slowly than you usually do, and consider pausing for a moment mid-chart to allow people time to absorb the data; even if you prefer to wait until the end of your presentation for questions, ask if anyone needs you to clarify the chart.

While clarifying statements are helpful, that doesn’t mean you can neglect the visuals. If all you do is produce your charts with a tool like Infogr.am or Tableau — both of which will produce charts that look a heck of a lot better than what Excel spits out — you’ll immediately improve your data-driven presentations. You may still need to restructure or reformat your charts to make them work on screen, however. Even if you’ve used shading to differentiate between categories in a printed document, it will be easier for people to distinguish between on-screen categories if they’re shown in different, contrasting colors.

Make sure your legend and data labels are printed in a large, visible font; if you’ve used an infographic design tool like Infogr.am, which generates beautiful charts but doesn’t let you adjust font sizes, you’ll have to add your own, larger labels when you’re producing your final deck. (My trick is to create those labels as individual text boxes in the color of my chart columns, so I can drop them on top of my columns and hide the original labels.)

If you don’t have the support of an infographic designer in creating your charts, get familiar with the very basic rules of good data visualization, like which types of charts to use for different purposes. And make sure that you don’t violate any data visualization principles when you squeeze your data onto a slide: if you simply can’t fit your entire chart onto a single slide in a way that is readable, it’s better to show highlights than to compromise the clarity of your data. A column chart that shows eight categories of clustered columns is going to be very crowded — but don’t you dare turn it into a set of pie charts instead: if there’s value in comparing categories, you want to keep that as a bar chart where all the categories are aligned on a single base line. Perhaps you don’t really need to show all eight categories — just the five most important ones. Or maybe you will have to break your chart into two successive slides; if so, organize your categories so that the most closely related categories are kept together on each slide.

Lastly, there is a lot of value in leaving people with a physical (or virtual) copy of your charts, so that they can look at the numbers more closely after your presentation. Since data-driven decks and reports tend to get circulated, make sure that any charts you include can stand on their own, without you speaking to them: note the source of your data, make your legend clear, and annotate your charts with callouts that show people how to make sense of a specific data point (“7 in 10 customers chose the blue package”).

If this is starting to sound daunting, don’t let a vision of the ideal data-driven presentation or report keep you from using data to present your work or ideas to colleagues or peers. With data storytelling, excessively high standards can keep us from seizing the opportunity to make a good story better by backing it up with quantitative evidence.

You don’t have to have the perfect dataset or the world’s most beautiful infographics to make data storytelling a valuable part of your communications toolbox. All you need is to break down the wall that keeps math in one part of your brain, and storytelling in another.

Alexandra Samuel is a speaker, researcher and writer who works with the world’s leading companies to understand their online customers and craft data-driven reports like Sharing is the New Buying. The author of Work Smarter with Social Media (Harvard Business Review Press, 2015), Alex holds a Ph.D. in Political Science from Harvard University. Follow Alex on Twitter as @awsamuel.


Copyright 2015 Harvard Business Publishing. All Rights Reserved. Additional restrictions may apply including the use of this content as assigned course material. Please consult your institution's librarian about any restrictions that might apply under the license with your institution. For more information and teaching resources from Harvard Business Publishing including Harvard Business School Cases, eLearning products, and business simulations please visit hbsp.harvard.edu.