This post, by guest blogger Daniel Saniski, is the eighteenth in a slightly-more-than-a-month-long series on the impressive diversity of participatory decision-making tools that communities can use for land use plans, transportation plans, sustainability plans, or any other type of community plan. Our guest bloggers are covering the gamut, from low-tech to high-tech, web-based to tactile, art-based to those based on scenario planning tools, and more. Daniel’s post explores the challenges and importance of unpacking complex quantitative data using unemployment statistics as an illustration. We welcome your feedback and would love to hear about the participatory design strategies that you’ve found to be the most useful.
“Unemployment is down!”
“Imports are up!”
“The price of coffee skyrocketed last month!”
News headlines scream data points at us each day, assuming we understand their meaning, source, and context. Although we see the same greatest hits of data each month (unemployment rate, inflation, job openings, GDP, imports/exports, etc.), many people do not realize that much of this data is available not just at the federal and state levels, but for their own town. The sheer quantity is sure to induce information overload, and it takes great care to find exactly the right points. Local data, along with comparison data from other cities, states, or a federal average, can and should be used in community decision-making, but wrangling data without misleading people is a challenge. Graphs carry enormous rhetorical power and should be kept close to the question at hand. Given the terabytes of possible data series we could explore, today we will look at some ways to frame and contextualize one metric: unemployment.
The Bureau of Labor Statistics tracks data on employment, prices, and consumer spending habits, with well over 10,000 data series ranging from the standard headline numbers to narrow measures like intercity bus and train fares. Most of its major data sources contain federal, state, regional, and city/metro sub-series, which offer endless curation opportunities. Using the unemployment data alone, we can spark a number of discussions.
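As an aside for the technically inclined, BLS series can also be pulled programmatically. The sketch below builds the JSON payload for the BLS public timeseries API; the endpoint and the series ID shown are stated to the best of my knowledge, so check the BLS developer pages before relying on them.

```python
import json

# Series ID believed to be the seasonally adjusted national unemployment
# rate; confirm IDs with the BLS data finder for your own series.
SERIES_UNEMPLOYMENT = "LNS14000000"

def build_bls_request(series_ids, start_year, end_year):
    """Build the JSON payload for a POST to the BLS public timeseries API
    (https://api.bls.gov/publicAPI/v2/timeseries/data/)."""
    return json.dumps({
        "seriesid": list(series_ids),
        "startyear": str(start_year),
        "endyear": str(end_year),
    })

payload = build_bls_request([SERIES_UNEMPLOYMENT], 2002, 2012)
print(payload)
# One would then POST this payload with Content-Type: application/json,
# e.g. via urllib.request or the requests library.
```

The same pattern works for any of the 10,000-plus series: swap in the state, regional, or metro series IDs to assemble your own comparisons.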
Consider the dramatic contrast between the unemployment rates of California and North Dakota: North Dakota’s rate is 3.1% while California’s clocks in at 10.9%. Why? Theories range from North Dakota’s use of a state bank to its extensive oil reserves. To keep correlation separate from causation, the answer matters less than the framing of the visual question. Such a stark disparity prompts a lot of compelling civic questions and can easily be used to start a discussion, although this is still a narrow context. To better inform the unemployment discussion, we need more numbers, some of which are equally dramatic.
The newspaper-headline unemployment rate and the “real” one are often pretty far apart. The headline rate counts only people without jobs who are actively looking for work; it misses people who have given up the search or who work fewer hours than they would like. The broadest measure, the U-6 or “total unemployed, plus all marginally attached workers, plus total employed part time for economic reasons,” captures everyone on the employment fringes and sits above 14%, compared to the headline measure at 8.2%. Now we have some sense of the measurement disparities, but these numbers still do not tell the whole story.
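To make that gap concrete, here is a minimal sketch of how the headline rate and the broad U-6 rate are computed from the same underlying counts. The counts are invented round numbers chosen so the results land near the figures quoted above; they are not actual BLS data.

```python
def u3_rate(unemployed, labor_force):
    """Headline (U-3) rate: unemployed as a share of the labor force."""
    return 100.0 * unemployed / labor_force

def u6_rate(unemployed, marginally_attached, part_time_economic, labor_force):
    """Broad (U-6) rate: unemployed, plus marginally attached workers, plus
    people working part time for economic reasons, as a share of the labor
    force expanded to include the marginally attached."""
    return (100.0 * (unemployed + marginally_attached + part_time_economic)
            / (labor_force + marginally_attached))

# Illustrative counts in thousands (invented for this example):
labor_force = 155_000
unemployed = 12_700
marginally_attached = 2_400
part_time_economic = 8_100

print(f"U-3: {u3_rate(unemployed, labor_force):.1f}%")
print(f"U-6: {u6_rate(unemployed, marginally_attached, part_time_economic, labor_force):.1f}%")
```

Notice that the two rates do not even share a denominator: U-6 also widens the labor force it divides by, which is part of why the headline number alone can mislead.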
One must also look at the Labor Force Participation Rate and the Civilian Employment-Population Ratio. These two figures tell you the rate at which people are participating in the economy. If the unemployment rate drops and these measures drop too, then one of three things probably contributed: a whole lot of people retired, went to prison, or dropped out of the labor force. Dropping out means they stopped actively looking for work, so they are no longer part of the 8.2%, but if they still want a job they remain part of the 14%. When trying to make sense of unemployment’s ups and downs and how they might affect your town, keep these measures in mind to make better decisions.
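Both participation measures are simple ratios over the civilian noninstitutional population. This sketch, again with invented numbers, shows how the unemployment rate can fall while participation falls too, purely because people give up looking, without a single job being created:

```python
def unemployment_rate(unemployed, labor_force):
    return 100.0 * unemployed / labor_force

def participation_rate(labor_force, population):
    """Labor Force Participation Rate: share of the civilian
    noninstitutional population working or looking for work."""
    return 100.0 * labor_force / population

def employment_population_ratio(employed, population):
    """Civilian Employment-Population Ratio: share of the population
    that is actually employed."""
    return 100.0 * employed / population

# Invented figures in thousands:
population = 243_000
employed = 142_000
unemployed = 12_700
labor_force = employed + unemployed

before = (unemployment_rate(unemployed, labor_force),
          participation_rate(labor_force, population),
          employment_population_ratio(employed, population))

# Now 1,000 (thousand) unemployed people stop looking and drop out:
dropped = 1_000
after = (unemployment_rate(unemployed - dropped, labor_force - dropped),
         participation_rate(labor_force - dropped, population),
         employment_population_ratio(employed, population))

print("before:", ["%.1f%%" % x for x in before])
print("after: ", ["%.1f%%" % x for x in after])
# The unemployment rate falls even though employment is unchanged.
```

The tell is the third number: the employment-population ratio does not budge, which is exactly the signal to check before celebrating a falling headline rate.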
Bringing this closer to PlaceMatters, here is some data about Denver that we will unpack in a minute, keeping in mind that local unemployment data lags national numbers. First, Denver’s unemployment rate was 9.2% as of January, 0.9 percentage points higher than that month’s national average. Second, payroll in Denver grew from 1990 to 2000 but has been essentially flat since then, even as the unemployment rate changed drastically. Third, the percentage of government employees in Denver has held steady at about 14% since at least 1990.
From these figures we gain some interesting insight. First and foremost, unemployment in Denver should be addressed, as it is well above the national average. But we would be remiss to blame it on the crash of 2008. Why? Payroll in Denver has not substantially changed in more than ten years. How very odd: who are these unemployed people? The city’s population has expanded dramatically since 1990. There is a serious discussion to be had in Denver, since people keep coming, aren’t getting jobs, and haven’t been for a decade.
As a final example, the last graph on this page shows the percentage of people working for the government in Denver. In an age of claims about “bloated” government, some quick calculations show that these arguments do not hold up (at least in Denver). Taking the time to dig into employment statistics can help you track where your city has been, where it is, and where it is going.
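Those quick calculations amount to dividing government payroll by total payroll, year by year. The figures below are invented for illustration, shaped to match the roughly flat ~14% share described above; the real series are available from the BLS metro-area payroll data.

```python
def government_share(gov_payroll, total_payroll):
    """Government employees as a percentage of total payroll employment."""
    return 100.0 * gov_payroll / total_payroll

# Invented Denver-metro payroll figures in thousands of jobs:
payrolls = {
    1990: (135, 960),    # (government, total)
    2000: (168, 1_200),
    2010: (172, 1_230),
}

for year, (gov, total) in sorted(payrolls.items()):
    print(year, f"{government_share(gov, total):.1f}%")
```

Government employment grew in absolute terms here, but so did everything else; only the ratio can say whether government’s footprint actually expanded.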
By adding context at the federal, state, and/or local level, we can gain greater understanding and start asking better questions, and framing the questions we do ask, with data. As seen when comparing the two big unemployment measures and their supporting participation rates, it takes more than one carefully curated number to employ data graphs successfully and fairly. Using these and other contextualized figures, we can help take government data out of the headlines and into our civic discussions, where it belongs.
This post was contributed by Daniel Saniski, the managing editor at Data360.org and an associate consultant at Webster Pacific LLC. He catalogs and writes news about government data and guides site development for Data360, and he provides business intelligence and information systems design services at Webster Pacific LLC.