Social Science


Connecting Social Problems and Popular Culture


Connecting Social Problems and

Popular Culture SECOND EDITION


Karen Sternheimer University of Southern California

A Member of the Perseus Books Group


Westview Press was founded in 1975 in Boulder, Colorado, by notable publisher and intellectual Fred Praeger. Westview Press continues to publish scholarly titles and high-quality undergraduate- and graduate-level textbooks in core social science disciplines. With books developed, written, and edited with the needs of serious nonfiction readers, professors, and students in mind, Westview Press honors its long history of publishing books that matter.

Copyright © 2013 by Karen Sternheimer

Published by Westview Press, A Member of the Perseus Books Group

All rights reserved. No part of this book may be reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information, address Westview Press, 2465 Central Avenue, Boulder, CO 80301.

Find us on the World Wide Web at

Every effort has been made to secure required permissions for all text, images, maps, and other art reprinted in this volume.

Westview Press books are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail

Library of Congress Cataloging-in-Publication Data
Sternheimer, Karen.
Connecting social problems and popular culture : why media is not the answer / Karen Sternheimer. —2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-8133-4724-0 (e-book)
1. Mass media—Moral and ethical aspects—United States. 2. Popular culture—Moral and ethical aspects—United States. 3. Mass media and culture—United States. 4. Social problems—United States. I. Title.
HN90.M3S75 2013
302.2301—dc23


10 9 8 7 6 5 4 3 2 1

For Frieda Fettner, whose wisdom and encouragement

will be with me always




1 Media Phobia: Why Blaming Pop Culture for Social Problems Is a Problem

2 Is Popular Culture Really Ruining Childhood?

3 Does Social Networking Kill? Cyberbullying, Homophobia, and Suicide

4 What’s Dumbing Down America: Media Zombies or Educational Disparities?

5 From Screen to Crime Scene: Media Violence and Real Violence

6 Pop Culture Promiscuity: Sexualized Images and Reality

7 Changing Families: As Seen on TV?

8 Media Health Hazards? Beauty Image, Obesity, and Eating Disorders

9 Does Pop Culture Promote Smoking, Toking, and Drinking?

10 Consumption and Materialism: A New Generation of Greed?

11 Beyond Popular Culture: Why Inequality Is the Problem

Selected Bibliography

Index



Rather than viewing popular culture as “guilty” or “innocent,” the central theme running through Connecting Social Problems and Popular Culture is that various media and the popular culture they promote and produce are reflections of deeper structural conditions—such as poverty, racism, sexism, and homophobia—and economic disparities woven into major social institutions. While discussions of sexism in various forms of media, for instance, are often lively and provocative, the representations themselves are not the core reason that gender inequality continues to exist. Media images bring it to our attention and may further normalize sexism for us, but our examination of our society should not end with media.

In order to understand social problems, we need to look beyond media as a prime causal factor. Media may be a good entry point for thinking about how social problems have a basis beyond the sole individual. But while that premise can open the discussion, this book aims to help students and other readers take the next step in understanding social problems. We must look deeper than popular culture—we need to look at the structural roots to understand issues such as bullying, violence, suicide, teen sex and pregnancy, divorce, substance use, materialism, and educational failure.

Neither media nor popular culture stands still for very long—making the study of both a never-ending endeavor. In this second edition of Connecting Social Problems and Popular Culture, I include a new chapter on fears about social networking and electronic harassment. With news accounts of high-profile cases linking bullying and “sexting” to suicide, it is important to uncover what we know about the role that new media play in such incidents. Perhaps not surprisingly, social networking is less of a culprit than an attention getter. Additionally, each chapter has been updated to incorporate, where applicable, new research and trend data on crime, pregnancy, birth and divorce rates, substance use, and other social issues for which popular culture is so often blamed.

The “link” between video games and actual violence is a perennial topic of interest for readers and lay theorists of social problems. In 2011 the US Supreme Court upheld a lower court’s ruling that states cannot limit the purchase of violent video games. In handing down this major decision, the Supreme Court concluded that California had not demonstrated that playing violent video games causes actual harm. I address this ruling in greater detail in Chapter 5 on media and violence.


Because popular culture is so ubiquitous—and, frankly, fun—it is a great window for students in a variety of courses to look through as they begin exploring social issues. Students in introductory sociology and media studies courses and social problems and social issues classes, as well as those studying inequality, will be able to make connections between the material and the many common beliefs about media’s effects on society that this book addresses.

By challenging the conventional wisdom about what the media “does” to its consumers—especially those considered less capable than their critics—readers can begin to think critically about the ways in which social issues are framed and how sensationalized news accounts help shape our thinking about the causes of societal problems. Beyond simply debunking common beliefs, this second edition stresses the importance of social structure and provides an introduction to structural explanations for the issues commonly blamed on popular culture. By digging deeper beyond simple cultural arguments, readers learn how policy decisions and economic shifts are important explanatory factors for many issues blamed on media.

Each chapter begins with examples from pop culture that many readers will already be familiar with, taken from celebrity gossip and controversial television shows like Teen Mom, high-profile news stories, and other easily accessible accounts. Additionally, each chapter introduces findings from recent research, often breaking down the components of the sampling and methods for readers to better understand how research is conducted and how to think critically about the results presented in the press. Where applicable, each chapter includes supporting data— and in some cases graphs—from federal sources, such as the census, Federal Bureau of Investigation, and Centers for Disease Control and Prevention, to provide evidence of long-term trends, often challenging misperceptions about particular issues. Because these sources are easily accessed online (and URLs are included in notes at the end of chapters), readers can learn to spot-check popular claims about these issues on their own in the future.

The evolution of this book, across its editions, has truly been a team effort. Thanks to Alex Masulis, my first editor at Westview Press, to Evan Carver who, early on, championed the second edition, and to Leanne Silverman, who helped bring the book in your hands to print.

I am also very thankful for my student researchers who helped find articles for this book. William Rice, Jessica Sackman, and Mishirika Scott assisted with the first edition, and Kimberly Blears helped with the revised edition. They and many other undergraduate students at the University of Southern California have been a pleasure to work with; their input in my classes helps keep me grounded in youth culture as time takes me further away from being anywhere near pop culture’s


cutting edge. Several anonymous reviewers provided useful comments and suggestions, and I thank them for helping make this book stronger. For their helpful criticisms and invaluable suggestions, I also want to thank David Briscoe, Joshua Gamson, Kelly James, Marcia Maurycy, Janet McMullen, and Markella Rutherford.

The Department of Sociology at the University of Southern California has been my professional home for many years, and I could not have written this book without years of the department’s enthusiastic support. I am grateful for the many graduate and undergraduate students with whom I have shared countless hours of thought-provoking discussions. Special thanks to Mike Messner, Barry Glassner, Sally Raskoff, Elaine Bell Kaplan, Karl Bakeman, and Eileen Connell for their continued support of me and my work. And most of all, thanks to my family, without whom none of this would be possible. A special thanks to my parents and sisters for their continued support, and for Eli and Julian, who are introducing me to a new generation’s pop culture.




Media Phobia Why Blaming Pop Culture for Social Problems Is a Problem

“They’re here!” Carol Anne exclaims in the 1982 film Poltergeist. “Who’s here?” her mother asks. “The TV people!” answers the wide-eyed blonde girl, mesmerized by the “snow” on the family’s television set. What follows is a family’s sci-fi nightmare: Carol Anne is taken away by the angry spirits terrorizing their home. Her only means of communicating with her family is through the television set.

This film’s plot serves as a powerful example of American anxieties about media culture. The angelic child is helpless against its pull and is ultimately stolen, absorbed into its vast netherworld. She is the family’s most vulnerable victim, and as such is drawn into evil without recognizing its danger. Carol Anne’s fate highlights the fear of what television in particular and popular culture more generally may “do to” children: take them someplace dangerous and beyond their parents’ reach. Ultimately, Carol Anne is saved with the help of a medium, but the imagery in the film reflects the terror that children are somehow prey to outsiders who come into unsuspecting homes via the TV set.

Thirty years later, media culture has expanded well beyond television; unlike in Carol Anne’s day, kids today use social networking, smartphones, iPods, the Internet, video games, and other technology that their parents may not even know how to use. Cable television was in its infancy in 1982: MTV was one year old, CNN was two. Today there are hundreds of channels, with thousands more programs available on demand at any time. Unlike in 1982, television stations no longer sign off at night. Our media culture does not rest. What does this mean for young people today, and our future?

Much of the anxiety surrounding popular culture focuses on children, who are often perceived as easily influenced by media images. The fear that popular culture leads young people to engage in problematic behavior, culminating in large-scale social problems, sometimes leads the general public to blame media for a host of troubling conditions.

For many people, this explosion of media over the past decades brings worry that, for instance, kids are so distracted by new technology that they don’t study as much. Are they crueler to one another now, thanks to social networking? Does our entertainment culture mean kids expect constant entertainment? Do kids know too much about sex, thanks to the Internet? Does violent content in video games, movies, and television make kids violent? Promiscuous? Materialistic? Overweight? Anorexic? More likely to smoke, drink, or take drugs?


This book seeks to address these questions, first by examining the research that attempts to connect these issues to popular culture. Despite the commonsense view that media must be at least partly to blame for these issues, the evidence suggests that there are many more important factors that create serious problems in the United States today. Popular culture gets a lot of attention, but it is rarely a central causal factor. Throughout the book, we will also take a step back and think about exactly why it is that so many people fear the effects of popular culture.

You might have noticed that all of the questions posed above focus on young people’s relationship with media and leave most adults out of the equation. As we will see, a great deal of our concern about media and media’s potential effects on kids has more to do with uncertainty about the future and the changing experiences of childhood and adolescence. In addition to considering why we are concerned about the impact of popular culture, this book also explores why many researchers and politicians encourage us to remain afraid of media culture and of kids themselves. Of course, popular culture has an impact on everyone’s life, regardless of age. But this impact is less central in causing problems than factors like inequality, which we will explore throughout the book.

The Big Picture: Poverty, Not Pop Culture

Blaming media for changes in childhood and for causing social problems has shifted the public conversation away from addressing the biggest issues that impact children’s lives. The most pressing crisis American children face today is not media culture but poverty. In 2011—the most recent year for which data are available—more than 16 million children (just under 22 percent of Americans under eighteen) lived in poverty, a rate two to three times higher than that in other industrialized nations. Reduced funding for families in poverty has only exacerbated this problem, as we now see the effects of the 1996 welfare reform legislation that has gradually taken away the safety net from children. Additionally, our two-tiered health care system often prevents poor children from receiving basic health care, as just over 9 percent of American children had no health insurance in 2011.1 These are often children with parents who work at jobs that offer no benefits.

These same children are admonished to stay in school to break the cycle of poverty, yet many of them attend schools without enough books or basic school supplies. Schools in high-poverty areas are more likely to have uncertified teachers; for instance, 70 percent of seventh through twelfth graders in such schools are taught science by teachers without science backgrounds.2 We worry about kids being in danger at school but forget that the most perilous place, statistically speaking, is in their own homes. In 2010, for instance, 915 children were killed by


their parents, compared with 17 killed at school during the 2009–2010 school year.3 By continually hyping the fear of media-made child killers, we forget that the biggest threats to childhood are adults and the policies adults create.

As we will see throughout this book, many of the problems that we tend to lay at the feet of popular culture have more mundane causes. Poverty plays a starring role in the most serious challenges American children face, such as lack of a quality education, violent victimization, early pregnancy, single parenthood, and obesity; popular culture is a bit player at best. Other issues this book addresses, such as materialism, substance use, racism, sexism, and homophobia, might be highly visible in popular culture, but it is the adults around young people, along with the way American society is structured, who contribute the most to them. Their causes are more complex than what we see onscreen, and we will examine them in the chapters that follow.

The media have come to symbolize society and provide glimpses of both social changes and social problems. Changes in media culture and media technologies are easier to see than the complex host of economic, political, and social changes Americans have experienced in the past few decades. Graphic video games are easier to see than changes in public policies, which we hear little about, even though they better explain why violence happens and where it happens. We may criticize celebrity single mothers because it is difficult to explore the real and complex situations that impact people’s choices and behavior. What lies behind our fear of media culture is anxiety about an uncertain future. This fear has been deflected onto children, symbolic of the future, and onto media, symbolic of contemporary society.

In addition to geopolitical changes, we have experienced economic shifts over the past few decades, such as the increased necessity for two incomes to sustain middle-class status, which has reshaped family life. Increased opportunities for women have created greater independence, making marriage less of a necessity for economic survival. Deindustrialization and the rise of an information-based economy have left the poorest and least-skilled workers behind and eroded job security for many members of the middle class. Ultimately, these economic changes have made supervision of children more of a challenge for adults, who are now working longer hours.

Since the Industrial Revolution, our economy has become more complex, and adults and children have increasingly spent their days separated from one another. From a time when adults and children worked together on family farms to the development of institutions specifically for children, like age-segregated schools, day care, and organized after-school activities, daily interaction in American society has become more separated by age. Popular culture is another experience that kids may enjoy beyond adult supervision. An increase in youth autonomy has


created fear among adults, who worry that violence, promiscuity, and other forms of “adult” behavior will emerge from these shifts and that parents’ influence over their children will decline. Kids spend more time with friends than with their parents as they get older, and more time with popular culture, too. These changes explain in large part why children’s experiences are different now than in the past; the differences are not simply the result of changes in popular culture.

A Brief History of Media Fears

Fear that popular culture has a negative impact on youth is nothing new: it is a recurring theme in history. Whereas in the past, fears about youth were largely confined to children of the working class, immigrants, or racial minorities, fear of young people now appears to be a more generalized fear of the future, which explains why we have brought middle-class and affluent youth into the spectrum of worry. Like our predecessors, we are afraid of change, of popular culture we don’t like or understand, and of a shifting world that at times feels out of control.

Fears about media and children date back at least to Plato, who was concerned about the effects that the classic Greek tragedies had on children.4 Historian John Springhall describes how penny theaters and cheap novels in early-nineteenth-century England were thought to create moral decay among working-class boys.5 Attending the theater or reading a book would hardly raise an eyebrow today, but Springhall explains that the concern emerged following an increase in working-class youths’ leisure time.

As in contemporary times, commentators blamed youth for a rise in crime and considered any gathering place of working-class youth threatening. Young people could afford admission only to penny theaters, which featured entertainment geared toward a working-class audience, rather than the “respectable” theaters catering to middle- or upper-class patrons. Complaints about the performances were very similar to those today: youngsters would learn the wrong values and possibly become criminals. Penny and later dime novels garnered a similar reaction, accused of being tawdry in content and filled with slang that kids might imitate. Springhall concludes that the concern had less to do with actual content and more to do with the growing literacy of the working class, shifting the balance of power from elites to the masses and threatening the status quo.

Examining the social context enables us to understand what creates underlying anxieties about media. Fear of comic books in the 1940s and 1950s, for instance, took place in the McCarthy era, when control over culture was high on the national agenda. Like the dime novels before them, comic books were cheap, were based on adventurous tales, and appealed to the masses. Colorful and graphic depictions of violence riled critics, who lobbied Congress unsuccessfully to place restrictions


on comics’ sale and production.6 Psychiatrist and author Frederic Wertham wrote in 1953 that “chronic stimulation … by comic books [is a] contributing [factor] to many children’s maladjustment.”7 He and others believed that comics were a major cause of violent behavior, ignoring the possibility that violence in postwar suburban America could be caused by anything but the reading material of choice for many young boys. Others considered pinball machines a bad influence; the city of New York even banned pinball from 1942 to 1976 as a game of chance that allegedly encouraged youth gambling.

During the middle of the twentieth century, music routinely appeared on the public-enemy list. Historian Grace Palladino recounts concerns about swing music in the early 1940s. Adults feared that kids wasted so much time listening to it that they could never become decent soldiers in World War II (sixty years later Tom Brokaw dubbed these same would-be delinquents “the greatest generation”).8 Palladino contends that adult anxieties stemmed from the growing separation between “teenagers,” a term market researchers coined in 1941, and the older generation in both leisure time and cultural tastes. Just a few years later, similar concerns arose when Elvis Presley brought traditionally African American music to white middle America. His hips weren’t really the problem; it was the threat of bringing traditionally black music to white middle-class teens during a time of enforced and de facto segregation.

Later, concerns about satanic messages allegedly heard when listeners played vinyl albums backward, along with panic in the 1980s over the masturbation reference in Prince’s “Darling Nikki,” led to the formation of Tipper Gore’s Parents Music Resource Center, Senate hearings, and parental warning labels. Both panics stemmed from parents’ discomfort with their children’s cultural preferences and the desire to increase their ability to control what their children know. Today, fears of media culture stem from the decreased ability to control content and consumption. While attending the theater or reading newspapers or novels elicits little public concern today, fears have shifted to newer forms of cultural expression like smartphones, social media, video games, and the Internet. Throughout the twentieth century, popular culture became something increasingly consumed privately. Before the invention of radio and television, popular culture was more public, and controlling the information young people were exposed to was somewhat easier. Fears surrounding newer media have largely been based on the reduced ability of adults to control children’s access. Smartphones and near-constant Internet access make it practically impossible for adults to seal off the walls of childhood from the rest of society.

These recurring concerns about popular culture are examples of what sociologist Stanley Cohen refers to as “moral panics,” fears that are very real but also out of proportion to their actual threat.9 Underneath the fear lies the belief that our way of life is at stake, threatened by evildoers—often cast as popular culture or


its young consumers—who must be controlled. The rhetoric typically takes on a shrill and angry tone, joined by people nominated as experts to attest to the danger of what might happen unless we rein in the troublemakers. Cohen calls those blamed for the crisis “folk devils,” the people or things that seem to embody everything that is wrong with society today. Typically, moral panics attempt to redefine the public’s understanding of deviance, recasting the folk devils as threats in need of restraint.

Moral panics typically have a triggering event that gathers significant media attention, much like the Columbine High School shootings in Littleton, Colorado, did in 1999. The tragic murder of twelve students and a teacher shocked the nation, which watched nonstop live coverage of the event on a variety of news networks. Drawing on previous concerns about youth violence and popular culture, a panic began surrounding video games, music, and the use of the Internet to post threats and gather information about carrying out similar attacks. In the aftermath, commentators linked the perpetrators’ pop culture preferences to their actions, suggesting that it was highly predictable that violent music and video games would lead to actual violence. This panic cast both teens and violent media as folk devils, claiming that both were a threat to public safety.

Panics about popular culture often mask attempts to condemn the tastes and cultural preferences of less powerful social groups. Popular culture has always been viewed as less valuable than “high culture,” the stuff that is supposed to make you more refined, like going to the ballet, the opera, or the symphony. Throughout history people have been ready to believe the worst about the “low culture” of the common folk, just as bowling, wrestling, and monster truck rallies often bear the brunt of put-downs today. It’s more socially acceptable to make fun of something working-class people might enjoy than to appear snobby and insensitive by criticizing people for their economic status.

The same is true of criticizing rap music rather than African Americans directly. Sociologist Bethany Bryson analyzed data from the General Social Survey, a nationally representative random household survey, and found strong associations between musical intolerance and racial intolerance. She notes that “people use cultural taste to reinforce symbolic boundaries between themselves and categories of people they dislike. Thus, music is used as a symbolic dividing line that aligns people with some and apart from others.” Bryson also observed a correlation between dislike of certain groups and the music associated with that group.10 So for many people, rap becomes a polite proxy for criticizing African Americans without appearing overtly racist.

Africana studies professor Tricia Rose writes that the discourse surrounding rap is a way to further construct African Americans “as a dangerous internal element in urban America—an element that if allowed to roam about freely will threaten the


social order.”11 She goes on to describe how rap concerts have been portrayed as bastions of violence in order to justify greater restrictions keeping black youth out of public spaces. Likewise, sociologist Amy Binder studied more than one hundred news stories about gangsta rap and found that while heavy metal was feared as potentially dangerous to individual listeners, rap’s critics focused on its alleged danger to society as a whole.12

Popular culture often creates power struggles. Every new medium brings new freedom for some and a greater desire for control to others. For instance, although the printing press was regarded as one of the greatest inventions of the second millennium, it also destabilized the power of the church when literacy became more widespread and people could read the Bible themselves. Later, the availability of cheap newspapers and novels reduced the ability of the upper class to control popular culture created specifically for the working class. Fears of media today reflect a similar power struggle, although now the elites are adults who fear losing control of what their children know, what their children like, and who their children are.

Constructing Media Phobia

Ironically, we are encouraged to fear media by the news media themselves, because doomsday warnings sell papers, attract viewers, and keep us so scared we stay glued to the news for updates. “TV is leading children down a moral sewer!” the late entertainer Steve Allen claimed in several full-page ads in the Los Angeles Times. Other headlines seem to concur: “Teens’ Web Is a Wild West,” warned the Orange County Register. The New York Times wrote of the dangers of “video games and the depressed teenager.” “Health Groups Link Hollywood Fare to Youth Violence,” announced the front page of the Los Angeles Times.13 These and hundreds of other stories nationwide imply that the media are a threat to children and, more ominously, that children are subsequently a threat to the rest of us.

The news media are central within American public thought, maybe not telling us what to think, but, to borrow a popular phrase, focusing our attention on what to think about. Known as agenda-setting theory, this idea suggests that the repetition of issues in the news shapes what the public believes is most important.14 The abundance of news stories similar to the ones listed above directs us to think about entertainment as public enemy number one for kids in particular. Whether the stories are about popular culture causing young people to commit acts of violence or to become sexually active, depressed, or addicted, stories about the alleged danger of popular culture help us make seemingly easy connections between media and social problems. Although not everyone who hears about these stories agrees


that there is a cause-effect relationship, the repeated focus on media effects keeps the debate alive and the attention away from other potential causes of troubling conditions.

Problems do not emerge fully formed; they need to be constructed in order to be recognized as important and worthy of our attention and concern. In their 1977 book, Constructing Social Problems, sociologists John Kitsuse and Malcolm Spector argue that social problems are the result of the work of claims makers, people who actively work to raise awareness and define an issue as a significant problem. This is not to suggest that problems don’t really exist, only that to rise to the level of a social problem, an issue needs people who lobby for greater attention to it.

The constructionist approach to social problems requires us to look closely not just at the issue of concern, but also at how we have come to think of it as a problem and—equally important—who wants us to view it as such. The popular culture problem is one example, created by a variety of people, including academics who do research testing only for negative effects and provide commentary attesting to its alleged harm; activist groups that seek to raise public awareness about pop culture’s supposed threat; and, as noted earlier, the news organizations that report on these claims. Politicians also campaign against popular culture, hold hearings, and propose legislation to appear to be doing something about the pop culture problem. Author Cynthia Cooper analyzed nearly thirty congressional hearings held on this issue, finding them to be little more than an exercise in public relations for the elected officials, yet hearings add to the appearance of a weighty problem in need of federal intervention. These claims makers do not simply raise awareness in response to a problem; their actions help create our sense that problems exist in the first place. Claims makers also shape the way we think about an issue and frequently “distort the nature of a problem,” as sociologist Joel Best details in his analysis of crime news.15 He acknowledges that claims makers might not do this on purpose and often have good intentions. After all, if people see what they believe to be a serious problem, raising awareness makes sense.

For example, consider the surgeon general’s report on youth violence, released in January 2001. This report indicated that poverty and family violence are the best predictors of youth violence. Nonetheless, the report concludes, “Exposure to violent media plays an important causal role,” based on research that is highly criticized by many media studies scholars.16 Newspapers capitalized on this single statement, running stories with the headlines “Surgeon General Links TV, Real Violence” and “Media Dodges Violence Bullet.”17 Even when studies point to other central causal factors, media violence often dominates the story—even in Hollywood.


You might be wondering what the harm could be in conducting research, holding hearings, and reporting on this issue. After all, media culture is very pervasive, and if it could be even a minor issue, shouldn’t we pay attention to it?

There is danger, however, in taking our attention away from other potentially more serious issues. The pop culture answer diverts us from delving into the other questions. Focusing on the media only in a cause-and-effect manner fails to help us understand media culture as a form of commerce created in a particular economic context. The quest to get the biggest box-office opening or Nielsen ratings leads to lowest-common-denominator storytelling, which explains the overuse of sex and violence as plot devices. Profit, not critical acclaim, equals success in Hollywood (and on Wall Street). Sex and violence create fascination and are sold in popular culture like commodities to attract our attention, if only for a little while.

Most ominously, the effects question crowds out other vital issues affecting the well-being and future of young people. These issues play out more quietly on a daily basis and lie hidden underneath the more dramatic fear-factor-type headlines. Sociologist Barry Glassner, author of The Culture of Fear, refers to this as social sleight of hand, a magician’s trick that keeps us focused on one hand while the other actually does the work, encouraging us to think of a trick as real magic. He warns that these diversions encourage us to fear the wrong things, while the real roots of problems go unexamined and often don’t rise in public awareness.

It’s not surprising that we have a difficult time looking beyond popular culture as an explanation for social problems. As a nation rooted in the ethos of individualism, Americans tend to understand troubling conditions as the result of poor personal choices. Certainly, these choices play a role, but we often fail to understand the contexts in which people make such choices.

Social structure is the sociological concept that gives us information about these contexts. For instance, social structure encourages us to look in depth at the big picture to understand what factors may shape people’s choices. Looking carefully at patterns of arrangements within our economic system, at inequality in terms of race, gender, sexual orientation, and socioeconomic status, will help us understand why, for instance, some people might be more prone to bully, to commit violence, to become pregnant as a teen, or to drop out of school.

For example, many critics of rap music have argued that some of the lyrics are extremely misogynistic, encouraging young listeners to devalue women. While disturbing lyrics get our attention, sociologists Terri M. Adams and Douglas B. Fuller argue that rap is just a continuation of a long history of demonizing women, particularly black women. The “Jezebel” myth (the modern-day “ho”) of the hypersexual woman who uses her wiles to manipulate men dates back to slavery and served as an excuse for white men to violate African American women. Similarly, the “Mammy” myth (today’s “bitch”) also has roots in slavery as the bossy woman who orders black men around while serving her white masters. In more contemporary times, politicians have used these characterizations to blame women for urban poverty: Ronald Reagan’s 1980s-era “welfare queen” who allegedly can’t stop having babies and Senator Daniel Patrick Moynihan’s emasculating matriarch of the 1960s, supposedly destroying the African American family with her strength.18 Whereas politicians may use more genteel language, the outcome of reduced funding for children in poverty carries far more potential destructiveness than the prolific use of profanity in rap. In fact, part of the insidiousness of sexism lies in the use of language to cover and obfuscate its continued importance in American life. The realities of discrimination and violence against women are less sensational than rap’s in-your-face lyrics, but they are still with us.

For example, the National Crime Victimization Survey (NCVS), a nationally representative survey conducted by the Department of Justice each year, reported that in 2010, 169,370 American women and girls over twelve said they had been raped or sexually assaulted, a rate of .7 per 1,000. Intimate-partner violence accounted for 22 percent of nonfatal violence against women.19 Females are generally less likely than males to be victims of violence overall, but these figures highlight the dangers women often face from those closest to them.

Structural factors are often difficult to see for those not trained to think sociologically. It can be hard to recognize how policies enacted decades ago might shape patterns of violence or school failure today, but they do. Thinking in terms of social structure involves connecting the dots between the past and present, between large-scale social institutions and individual choices. One of the central goals of this book is to help readers understand the many structural factors behind the problems that popular culture is often blamed for causing.

Not only is this an issue that politicians can use to connect with middle-class voters, but researchers can also get funding from a host of sources to continue to seek negative media effects. David L. Altheide, sociologist and author of Creating Fear: News and the Construction of Crisis, suggests that fear-based news helps support the status quo, justifies further social control, and encourages us to look for punitive solutions to perceived problems. Meanwhile, more significant causes of American social problems fall by the wayside.

Deconstructing Media Phobia

This book uses the constructionist approach to understand how claims makers blame popular culture for causing social problems. This does not mean that all problems are just invented crises, nor does it mean that popular culture is all benign entertainment that should not be critically analyzed. Within each chapter, we will examine the structural roots of the various issues that tend not to attract the massive attention or news coverage that popular culture does. Issues such as the persistence of poverty, unequal access to quality education, reduced information about birth control, overall disparities in opportunity, and the continued presence of racial and gender inequality explain many of the problems we hear blamed on popular culture.

Understanding moral panics about popular culture involves addressing both how the fear is constructed and why the fear is out of proportion, which requires us to consider objective evidence. Throughout this book, we will examine data and trends within each chapter to see that many of the problems attributed to popular culture are not necessarily getting worse. Sometimes the problems are very serious (such as violence and educational disparities), and an emphasis on media serves to trivialize them. Studies purporting to find evidence of media culpability are often profoundly flawed or overstate their findings. Because research methodology can be complex and dry, the public almost never hears how researchers actually conducted the studies that are discussed in the news. We will do that here, and in the process you will see that some of the research we hear so much about has serious shortcomings.

In the following chapters, we will consider claims that popular culture promotes educational failure, online bullying, violence, promiscuity, single parenthood, materialism, obesity and eating disorders, drinking, drug use, and smoking, as well as racism, sexism, and homophobia. These are important and often misunderstood issues that merit further exploration.

Media culture may not be the root cause of American social problems, but it is more than simply benign entertainment. The purpose of this book is not simply to exonerate media culture as inconsequential: I contend that media culture is a prime starting point for social criticism, but our look at social problems should never end with the media. Pointing out the real issues we should be concerned about does not absolve the entertainment industry of its excesses and mediocrity, particularly the news media, which often heighten our fears while providing little context or analysis. Fear is a powerful force, especially when children seem to be potential victims, so it is understandable that the public would be concerned about our ubiquitous media culture. With news reports’ attention-grabbing visuals and the constant competition for our interest, the fear that media are a central threat to children and the future of America is a tempting explanation, but at best, it is misguided.

This fear of media was not invented out of thin air, nor is it fanned only by news stories suggesting media culture is dangerous. There is a parallel groundswell of public concern about the larger role of media culture in contemporary American society. Let’s face it: a lot of media culture is highly sexualized, is filled with violence, and seems to appeal to our basest interests, and some people do use social networking to be incredibly rude and abusive.

The media act as a refracted social mirror, providing us with insights about major social issues such as race, gender, class, and the power and patterns of inequality. The media are an intricate element of our culture, woven into the fabric of social life. For example, many people rightly criticize the highly sexualized images of women in popular culture, the limited representations of people of color on television, and the brutality of fantasy violence in movies and video games. These images exist in the context of a society still mired in various forms of inequality, and although in many respects inequality has been reduced, it still exists. Limited or absent representations of the elderly, the plus-sized, the disabled, and other marginalized groups reflect the tendency of mass entertainment to focus on a narrow portrait of American life. Popular culture can be a great starting point to discuss issues of power, privilege, and inequality.

Media Matter

I want to be clear that by arguing that popular culture isn’t the central cause of our biggest problems, I am not saying that media have no impact on American society or that popular culture doesn’t matter. Far from it. Our various forms of media shape our communication with each other and how we spend our time, and we use many forms in constructing our identities. Popular culture shapes what we talk about, how we think of each other, and how we think about ourselves. Media matter, but our relationship to their many forms is more complex and multifaceted than simple cause-effect arguments suggest.

For example, people might use music as a means of forming connections with others at festivals like Burning Man and for navigating emotional challenges of relationships and self-image. A Facebook account is a way to construct a public self and has become a central means of communication for many people. Debates about use of the N word in music lyrics can lead to broader discussions about the word’s history and meaning and the state of racism today.

I also understand why people are concerned about the content of popular culture. Many of us find it to be distasteful at times and wonder what its impact may be. Others don’t like hearing foul language blasting from the stereo of the car next to theirs and cringe when young girls seem to emulate sexy pop stars. Media culture has become very pervasive in the past few decades, and at times it feels like it bombards us—twenty-four-hour news streams, constant texting, and social networking have reshaped our daily lives and interactions. The news media are often guilty of peddling fascination rather than information. This book serves as a critique of press coverage of social problems and an examination of why the “media made them do it” theme continually resurfaces. I understand why critics sometimes argue that graphic media depictions of sex and violence and the prolific use of profanity debase our culture. Hollywood’s dependence on these tools often represents the failure to tell complex stories and the lack of courage to take artistic (and financial) risks. Rather than just ask Hollywood for self-censorship, we should have more choices, more opportunities for our media culture to engage the complexities of life that the summer blockbusters seldom do. But business as usual often makes this impossible, when a handful of big conglomerates produce the lion’s share of entertainment media and smaller producers have a difficult time getting attention. The 1996 Telecommunications Act, which eased media-ownership restrictions, made it even harder for smaller media outlets to compete with the big conglomerates like Disney, Time-Warner, and Viacom.

That said, I know that sometimes at the end of a long day, I prefer to be distracted and amused rather than informed or inspired. With the threat of terrorism and the lingering fallout from the Great Recession, superficial entertainment serves a purpose. But deflected anxiety doesn’t go away; it just resurfaces elsewhere. And in uncertain times such as our own, it is understandable that our concerns would eventually focus on popular culture that both reminds us of our insecurities and distracts us from them. But understanding the most important issues and their causes can help alleviate anxieties about both popular culture and young people, and help us focus on the roots of troubling issues in order to find solutions. This book aims to do just that.

Notes

1. US Bureau of the Census, Income, Poverty, and Health Insurance Coverage in the United States: 2011, Report P60, n. 243, Table B-2, 16, 22.

2. Children’s Defense Fund, The State of America’s Children Yearbook, 2002 (Washington, DC: CDF, 2002).

3. US Department of Health and Human Services, Administration on Children, Youth, and Family, Child Maltreatment, 2010 (Washington, DC: Government Printing Office, 2011); US Department of Education, Indicators of School Crime and Safety: 2011 (Washington, DC: Government Printing Office, 2012).

4. For further discussion of Plato’s concerns, see David Buckingham, After the Death of Childhood: Growing Up in the Age of Electronic Media.

5. John Springhall, Youth, Popular Culture, and Moral Panics: Penny Gaffs to Gangsta-Rap, 1830–1996.

6. For further discussion, see ibid., chap. 5.

7. Frederic Wertham, “Such Trivia as Comic Books.”

8. Grace Palladino, Teenagers: An American History; Tom Brokaw, The Greatest Generation (New York: Random House, 1998).

9. Stanley Cohen, Folk Devils and Moral Panics.

10. Bethany Bryson, “‘Anything but Heavy Metal’: Symbolic Exclusion and Musical Dislikes.”

11. Tricia Rose, “‘Fear of a Black Planet’: Rap Music and Black Cultural Politics in the 1990s,” 279.

12. Amy Binder, “Constructing Racial Rhetoric: Media Depictions of Harm in Heavy Metal and Rap Music,” 754.

13. David Whiting, “Teens’ Web Is a Wild West,” Orange County Register, December 14, 2011; Roni Caryn Rabin, “Video Games and the Depressed Teenager,” New York Times, January 18, 2011; Marlene Cimons, “Health Groups Link Hollywood Fare to Youth Violence,” Los Angeles Times, December 13, 2000, A34.

14. Maxwell E. McCombs and Donald L. Shaw, “The Agenda-Setting Function of the Mass Media.”

15. Cynthia Cooper, Violence on Television: Congressional Inquiry, Public Criticism, and Industry Response—a Policy Analysis; Joel Best, Random Violence: How We Talk About New Crimes and New Victims, xiii.

16. US Department of Health and Human Services, Youth Violence: A Report of the Surgeon General (Washington, DC: Government Printing Office, 2001). For more discussion of the research on which the statement was based, see Chapter 2.

17. Jeff Leeds, “Surgeon General Links TV, Real Violence,” Los Angeles Times, January 17, 2001, A1; Jesse Hiestand, “Media Dodges Violence Bullet; Poverty, Peers More to Blame,” New York Daily News, January 18, 2001, B1.

18. Terri M. Adams and Douglas B. Fuller, “The Words Have Changed but the Ideology Remains the Same: Misogynistic Lyrics in Rap Music.”

19. Jennifer L. Truman, “Criminal Victimization, 2010,” National Crime Victimization Survey (Washington, DC: US Department of Justice, 2011).



Is Popular Culture Really Ruining Childhood?

“There is reason to believe that childhood is now in crisis,” writes law professor Joel Bakan in a 2011 New York Times op-ed. He lists a number of factors for his concern, beginning with a description of his teenage children, “a million miles away, absorbed by the titillating roil of online social life, the addictive pull of video games and virtual worlds, as they stare endlessly at video clips and digital pictures of themselves and their friends.” He is not alone in the belief that popular culture is at least partly to blame for negatively impacting childhood. “Pop culture is destroying our daughters,” a 2005 Boston Globe story declared, affirming what many parents and critics believe. The article, tellingly titled “Childhood Lost to Pop Culture,” described young girls “walking around with too much of their bodies exposed,” their posteriors visible while sitting in low-rise jeans.1

The concerns are not just in the United States, either. A British newspaper warned readers of children’s “junk culture,” asking whether we have “poisoned childhood” with video games and other kinds of popular culture. A Canadian newspaper asked, “Can the kids be deprogrammed?” noting that “concern is mounting that pop culture may be accountable for a wide range of social and physical problems that begin in childhood and carry through to adulthood.”2

Stories like these reinforce what many people think is obvious: childhood is under siege, and popular culture is the main culprit. From celebrities making questionable life choices to violent video games and explicit websites, there is certainly a deep well of pop culture to draw from in order to find examples of bad behavior that many fear will send the wrong message to kids. But despite the plethora of potential bad influences, pop culture is not changing children and childhood as much as we might fear.

First, we need to examine the meaning of childhood itself. If childhood looks different from what many people presume it should, we need to critically consider what it is “supposed” to be like and how we collectively create the meaning of childhood. Are children’s lives really far from the ideal that pop culture is allegedly destroying?

Second is the presumption that the experience of childhood has changed for the worse. Some people are deeply concerned that children know things that we think they shouldn’t—about sex, violence, alcohol, and drugs. But who decides what children should and shouldn’t know (or when they should know it) and whether knowledge itself is dangerous? Before we convict popular culture, we need to consider whether children and childhood itself have really been damaged.

Finally, if children’s experiences of childhood have changed, we often presume that popular culture is the main cause. But is it really?

In this chapter we will examine these three basic questions about children and popular culture. As we will see, childhood has not been ruined, nor is it ending earlier than in generations past. Yes, children’s experiences are different now than they were when I was growing up and likely from when you were growing up, too. When I was ten, cable television was just coming out (with only a few dozen channels), VHS and Betamax were starting their battle for household domination, and portable music mostly meant a transistor radio. But there were many other factors—more important factors—shaping the experiences of kids my age than our media consumption, just as there are for kids today.

Americans fear media in part because we are constantly told we should and, more important, because media are the most visible representation of the many changes that have altered the experiences of childhood. Changes in popular culture are much easier to spot than shifts in social structure. In this chapter I address why media are so often considered detrimental to childhood and the primary spoilers of innocence. Instead of media being the true culprit, broader social, political, and economic changes over the past century have made adults uneasy about their ability to control children and the experience of childhood itself. Most centrally, fears about the demise of childhood make us nostalgic for our own lost childhoods. In a way we are longing for our lost selves when we think that childhood and children have been damaged by popular culture. The many moral panics surrounding young people and popular culture stem from misunderstandings about children’s well-being today, and the shifting meanings of childhood itself.

The Meaning(s) of Childhood

What is childhood? This may seem like an obvious question, but its definition is trickier than we might think. For one, Americans don’t even agree on when a child’s life begins—at conception? the second trimester of pregnancy? at birth? Once children are born, the confusion doesn’t end. Many might agree that people under ten can be classified as children, but we will probably not all agree on the sorts of experiences they should have. A religious education? Chores? Responsibility for younger siblings? A job? Underlying these decisions are a variety of basic ideas about what childhood should mean, and these decisions change over both time and place.

If we have trouble defining when childhood begins, we really have difficulty agreeing on when it ends. Is adolescence the cutoff? Age eighteen? Twenty-one? Neither age is really the clear threshold to adulthood; after all, in some states children as young as ten can be tried as adults in criminal court.3 On the other hand, some adults regard college students—many well over eighteen and even over twenty-one—as kids, not yet in the real world.

As a society we have mixed feelings about children and childhood. We all have different experiences of childhood ourselves. For some of us, this experience might have been fun and seem carefree (at least through the benefit of hindsight). For others, childhood might have been a painful experience, one best left behind. While people’s experiences of childhood are quite varied, when I ask my students to define the term child, they seem to have no trouble finding common adjectives. Words ranging from innocent, good, cute, pure, helpless, and vulnerable to mischievous, impulsive, ignorant, and selfish come up year after year. A close analysis of these terms reveals that they certainly do not apply to all children, and they actually fit the behavior of some adults. Note that these words connote either sentimental or pejorative views of young people, a caricature of a vast and diverse group. Advertisers and politicians frequently use these symbols in order to sell products or their political platforms.

But these words are not as benign as they might seem. Similar descriptors have historically been used to define women, people of color, and other minority groups to justify their inferior social status.4 Although most people now realize that one’s race, ethnicity, gender, or religion cannot be used to identify personality traits, we still often view children as sharing a set of stable characteristics. Children are a group easily stereotyped, sentimentalized, and misrepresented.

At the same time, there is a danger in viewing children as a singular group. Experiences of childhood are diverse and changing, yet often our standard for the ideal childhood in America (and adulthood for that matter) is based on white, middle-class, and usually suburban standards. If I’m not careful I can fall into this trap too, since this was my experience of childhood growing up in a Midwestern suburb not too far from where the mythical Cleavers of Leave It to Beaver supposedly lived. Childhood is rooted in social, economic, and political realities and is not a universal experience shared by all people of a certain age from the beginning of time. These realities, like the air we breathe, are often invisible, and thus this experience of childhood might seem normal to those who once lived it.

Certainly, each one of us can think of how children’s experiences are different now than in the past. But they are also different based on the circumstances of the present. For instance, a girl growing up in my old neighborhood today will likely have a very different experience if her family’s economic situation, ethnicity, and immigration status are different from mine. Across town, another girl of the same age who lost a parent and lives in public housing will have yet other experiences, as will the girl from another religious background who lives in a rural area miles away. Like snowflakes, no two experiences of childhood are exactly alike.

But we tend to define children as a unitary group and focus on how they are unlike adults. I know what you might be thinking—children aren’t adults. This is true, but some of the differences are not as clear-cut as we might think. Some children have significant family responsibilities and can always be counted on to be there for the ones they love. Some adults cannot. Some children are very serious and stressed out, while some adults are not. And we all probably know some adults who are financially dependent on others and anything but emotionally mature. Just as some grown-ups don’t meet the ideal definition of what it means to be an adult, many children don’t necessarily fit the stereotype of the child.

This is why we must strive to understand the varied experiences of childhood and how children define their own reality, rather than simply how different they are from the dominant group. Just as the historical definition of women as less competent than men served to perpetuate male dominance, the social construction of childhood serves adult needs and reinforces adult power rather than best meeting the needs of young people. While young children are dependent upon adults in many ways, we tend to define them only by the qualities they lack rather than the competencies they possess.

David Buckingham, professor of education at the University of London, explains the danger of thinking about children as fragile and focusing only on adult protection. Instead, he argues that we need to work toward preparing children to face the realities of the world around them.5 Protection is an idea difficult to let go of—it sounds so noble and above reproach. To prepare rather than protect empowers children to make their own decisions, armed with the necessary information. As much as some people might hope, shielding children from information in media is practically impossible; Buckingham urges adults to focus on preparing children to become empowered media consumers.

Children who know things adults don’t think they should know challenge the notion of innocence and sometimes seem threatening. Knowledge is the antithesis of innocence, often seen as the antithesis of childhood itself. The “knowing” child, author Joe Kincheloe points out, is routinely seen as a threat within horror movies. For example, he describes the 1960s British film Village of the Damned, in which children can read adults’ minds. Based on this perceived threat, the parents ultimately decide they must kill their own kids. Jenny Kitzinger notes in her study of abuse that a child who has knowledge about sex is often considered ruined and less a victim than a naive counterpart.6 Withholding knowledge is central to maintaining both the myth of innocence and power over children, which is at the heart of media fears. Media destabilize the myth of innocence and challenge adults’ ability to withhold knowledge from children. This is the real threat popular culture poses; rather than threatening kids themselves, popular culture often challenges adult control.

Our conception of childhood reveals a major contradiction between the value of knowledge and the luxury of innocence. It is often through media that adults confront the reality that children do not necessarily embody innocence as much as adults might hope. We struggle to maintain the sense that childhood means carefree innocence and blame popular culture for getting in the way. The more closely we examine both media and the way we conceptualize childhood, the better we will understand the fear surrounding this relationship. We see how unclear the boundary between adulthood and childhood really is. Sometimes it is the media that help blur the line of demarcation; other times it is media that expose the ambiguity.

We often perceive childhood innocence as a natural, presocial, and ahistorical state that all children pass through.7 Idealizing childhood as a time of innocence causes us to panic when children know more than some think they should. We place a great deal of blame for this loss of innocence on media, as if innocence were something that would stick around longer without popular culture. As we will see in the next section, “innocence” before the age of electronic media was likely to involve higher child mortality rates and an early introduction to hard work in factories, fields, and mills.

Childhood is constantly shifting and changing, and it becomes defined based on the needs of society. The idea that childhood in the past was composed of carefree days without worry is a conveniently reconstructed version of history. This fantasy allows adults to feel nostalgia for a lost idealized past that never was. Experiences of children have changed, but popular culture is at best a minor player in the story.

What Really Changed Childhood?

There should be no doubt that children’s experiences of childhood change over time. In my own family history (and likely yours too), when we compare generations the differences become clear. I have a grandfather whose education ended in the eighth grade so he could work full-time in the family business, something not unusual for his peers during the 1920s. Of course, if my parents took me out of eighth grade to work in the 1980s, they would have been in big trouble. This isn’t because people in the 1920s didn’t care about children, but the needs in many families were different at that time, and child labor wasn’t as restricted. My grandfather was the seventh of eight children and lost his father in World War I, as did many children of his generation. Many like him were needed to contribute to their families to ensure basic survival.

By the time I came around, much had changed, both in my family and within American society as a whole. The country had gone through a period of tremendous economic growth, making children’s labor unnecessary. The passage of child labor laws restricted children’s work, and compulsory education laws made school attendance mandatory. And most important, the postindustrial, information-based economy created the need for a highly educated workforce. A lack of a high school (and increasingly college) education would put economic survival in jeopardy for people of my generation. By contrast, my grandfather learned his family trade and eventually had his own business in the garment industry, something that would be more difficult today with the predominance of large retail chains and Internet commerce.

These generational differences had much more to do with economics than culture. Yes, the array of media available was vastly different in my grandfather’s day (and he took pleasure in buying me the stereo he never had), but popular culture did not alter the structural realities of either of our childhood experiences.

Not only have childhood experiences changed significantly over time, but the notion of the ideal childhood has, too. In fact, even the idea that there is a distinct period of the life course called “childhood” is a relatively recent development, according to historian Philippe Ariès, whose groundbreaking 1962 book, Centuries of Childhood: A Social History of Family Life, claims that childhood did not exist as a separate social category in Western culture before the seventeenth century. Based on his analysis of paintings, Ariès observes that children were painted as miniature adults, mostly wearing the same type of clothing and drawn in adult proportions. Little seemed to separate the social roles between adults and children at that time. Although historians have challenged Ariès on several points, his work clearly demonstrates that childhood was conceptualized very differently in the past than it is today.

Whereas Ariès’s focus was on the children of French aristocrats, historian Karin Calvert describes how colonial American childhood was not regarded as an ideal time of life, as it so often is today.8 She describes how high rates of infant mortality and childhood illness made childhood particularly risky, something to hurry up and survive rather than slow down and savor (or worry that it is over too fast). Childhood itself became associated with illness. A colonist entering the New World often met with danger, and growing old was a form of conquest.

Unlike today, when popular culture reveres all things youthful, maturity was highly regarded and looked forward to as a time of prestige. Think of the nation’s founding fathers and their white powdered wigs and white stockings, which added years to their appearance. Calvert goes on to say that by the early nineteenth century, American independence had changed the conception of childhood from a period of intense protection to one of greater freedom. She contends that coddling fell out of favor: just as overinvolvement of the mother country was seen as restrictive, parents were discouraged from being overprotective of their children. The belief was that children were made strong by a tough upbringing, while coddling only weakened them.

Calvert explains that during the Victorian era, when infant mortality rates began to fall, childhood evolved into a celebration of innocence and virtue. Families of wealth attempted to keep children pure by separating them from adult society, even from their own parents. Governesses and boarding schools attempted to prevent contamination from adults as long as possible. Childhood became an idealized time of life, reflected in advertisements, which used images of children to connote purity in products like food and soap.9

But the Victorian attempt to keep children away from the adult world was clearly available only to the affluent. For many children, carefree play and ignorant bliss do not mark past or present experiences of childhood. Death was much more likely to be part of childhood in previous centuries, with high rates of infant mortality, childhood illness, and shorter life expectancy. Historian Miriam Formanek-Brunell notes that nineteenth-century children’s doll play often involved mock funerals, reflecting anything but happy-go-lucky childhood experiences.10 It is our recent conception that insists that childhood should mean freedom from knowledge of the darker side of life.

For other families, childhood meant work at far younger ages than we see now in the United States—although children in developing countries frequently work for wages today. In nineteenth-century America, children in rural areas were needed on family farms, and even if they attended school, their labor was still a necessary part of the family economy. Learning a craft might have meant becoming an apprentice at age eight or nine. Children held in slavery were considered chattel and expected to work as well. By twenty-first-century standards, children working for wages may seem inhumane, but for many families it was economically necessary. Households required full-time labor for tasks like cooking, cleaning, and sewing, particularly in the decades before World War I when poor and rural families were unlikely to have electricity. Since an adult was needed to do the work of maintaining the family, it was necessary for nearly 2 million children to work for wages in 1910.11

Working children often experienced a great deal of autonomy, especially those living in cities. As historian David Nasaw describes, city kids selling newspapers or shining shoes sold their goods and services late into the night, as newspapers published evening editions.12 They kept a portion of their earnings for themselves but gave most to their parents, who were often dependent on the extra money their kids brought in. When reformers—mostly affluent white women who favored the idea that children should be protected from city life—attempted to get them into schools, many of these young peddlers resisted. Giving up their freedom and their incomes did not sit well with the kids, or with their parents who relied on their contributions.

Children’s wages were vital sources of income around the turn of the century, particularly for immigrant families, and constructions of the ideal childhood reflected this need. The useful child was regarded as a moral child, mirroring the adage “Idle hands are the devil’s workshop.” Work and responsibility were considered fundamental values for children, which sociologist Viviana A. Zelizer notes date back to the Puritan ethic of hard work and moral righteousness in early colonial America. Work was viewed as good preparation for a productive adult life, while higher education remained the domain of elites. The industrial-based economy did not require a great deal of academic training from its labor force. Thus, receiving only an eighth-grade education, as my grandfather did, was not nearly as problematic in the first decades of the twentieth century as it is now.

Zelizer concludes that child labor “lost its good reputation” because children’s labor became less necessary due to rising adult incomes and the growing need for a more educated labor force.13 Compulsory education became more widespread in the early twentieth century, not just because it was more humane for children to be in school rather than in factories, but because it became more economically necessary. The growth of automation reduced the need for children in the labor force. Increasing enrollments in public schools also stemmed from a desire to create a separate institution to keep children busy during the day in the interest of public safety, as the large number of immigrant children led to concerns about juvenile delinquency. Fearing that poor immigrants constituted a criminal class, reformers instituted compulsory education, a way to legally enforce social control of this group.14 Schools provided a way to Americanize children, keep them out of the labor force until needed, and remove them from the streets.

This is a defining moment in the history of American childhood: from this point on, adults’ and children’s lives became increasingly divided. Children and adults went from sharing tasks on family farms or the shop floor before the 1930s to increasingly spending more time isolated from one another and creating distinct cultures.

The Creation of Childhood as We Know It

In a way, childhood as we think of it today is rooted in the fallout of the Great Depression years of the 1930s. Historian Grace Palladino contends that the separation between adults and children intensified during the Depression, when adolescents were far more likely to attend high school than in years past due to the shrinking labor market. Children were all but expelled from the workforce. Whereas only about 17 percent of all seventeen-year-olds graduated from high school in 1920, by 1935 the percentage had risen to 42 percent.15 It is during this time that some of the early concerns about young people and popular culture began, too.

The shared space of high school led to the creation and growth of youth culture. Young people’s tastes in music, for example, grew to bear more resemblance to their peers’ than their parents’. Palladino cites swing music as a major cultural wedge between parents and youth in the late 1930s. Parents complained that young people wasted their time listening to the music and were not as industrious as prior generations, a reflection of children’s exclusion from the labor force and increase in leisure time. This was particularly true following World War II, when economic prosperity coupled with mass marketing created even more distinction between what it meant to be a child, a teenager, and an adult.

The postwar economic boom fueled a consumption-based economy. Following strict rationing of goods during World War II, consumption and the widespread availability of goods expanded dramatically. The amount of consumer goods available to both adults and children exploded, and it became patriotic to spend instead of conserve. Families could also carry more debt with the introduction of credit cards, and home mortgages required much smaller down payments than in prewar days. Increases in wages and automation of household labor provided children with even more leisure time; this prosperity helped to create the new category called “teenager.”

Free from contributing to the family income, this young person had both more time and more money than his or her parents had a generation earlier. Producers created movies, television, and music with this large demographic group in mind, particularly as baby-boom children reached spending age in the late 1950s. But perhaps most centrally, market researchers recognized children as a distinct demographic group. Palladino details how market-research firms that focused specifically on understanding youth culture emerged during the late 1940s to better sell products to this increasingly important consumer group. The perception of youth as a time for leisurely consumption of popular culture began.

Marketers sold the idea that postwar childhood and adolescence should be fun. Following the struggles of the Depression and World War II, children born during the baby-boom years were seen as symbols of a bright, new future. Childhood illnesses like polio were gradually conquered, and basic survival was no longer most parents’ major concern. Instead, happiness and psychological well-being, luxuries of prosperity, became central.

Rather than simply being a time of physical vulnerability, as in the colonial period, or moral vulnerability, as in the Victorian era, postwar childhood came to be defined as a psychologically vulnerable time. Following the popularity of Freud in the United States, parents not only were expected to produce healthy and productive children but were also charged with the responsibility of ensuring their psychological well-being. From a Freudian perspective, the adult personality is formed through childhood conflicts. If these conflicts go unresolved, then neurosis or psychosis is likely to follow in adulthood, placing the burden of lifelong psychological health mainly on the mother, who, according to Freud, was central in these conflicts. This emphasis on children’s psychological health also supported a rigid gender ideology. Middle-class mothers, herded out of the paid labor force following World War II, held the lion’s share of responsibility to raise happy children, a relatively new mandate that would eventually suggest that parents— especially mothers—worry about their children’s media use.


The midcentury growth of suburbs also influenced the meaning and experience of childhood. Shifts from an agrarian to an industrial-based economy led to the growth of cities in the late nineteenth and early twentieth centuries, and following World War II the expansion of American suburbs altered both the experiences and the conceptions of childhood. With suburban life came the growing dependence on automobiles, often creating less mobility for young children dependent on parents for transportation and more mobility for teens who had access to cars. The car culture symbolized American independence: advertisements boasted of the adventures a car could offer on newly constructed superhighways.

Teenagers could also congregate away from parental supervision, listen to music, and visit drive-in movies on their own; in many ways the widespread availability of the automobile altered teen sexuality. Teens, now often free from the need to work to help their families, experienced less adult control, creating parental anxiety about their children’s access to the world around them.

Cultural scholar Henry Jenkins notes that political discourse increasingly described families as individual “forts,” or separate units striving to shield their children from the perceived harms of the larger community.16 In this approach to understanding childhood, children are considered to be under siege, while individual family homes and white picket fences serve as bunkers of suburban safety. The perceived outside dangers include not only unknown neighbors, but also popular culture. This view of childhood as being in danger from the outside world and in need of parental protection continues more than fifty years later, in spite of important social changes that have altered the realities of parenting and family life since that time.

Recently, the postwar era has been held up as ideal, a benchmark against which childhood today is often compared. This has more to do with adults thinking back to their own twentieth-century childhood experiences and idyllic midcentury television shows than reality. Although far fewer children lived in single-parent families and divorce was less common than today, this era was itself the product of specific economic, political, and social realities of the time.17 The prosperity after World War II, coupled with the strength of labor unions, meant that many more families could achieve and maintain middle-class status with one wage earner’s income. New homes in brand-new suburbs could be purchased with little money down, thanks largely to the GI Bill, which also made it possible for many returning vets to attend college for the first time in their family’s history. In many ways, the post-war years were golden.

But not for all. We forget about inequality when we romanticize the happy days of the 1950s.

Nostalgia for an allegedly carefree childhood of the past does not take into account the pervasive history of inequality in the United States. Economic prosperity was not shared by everyone: in 1955 African American families earned only fifty-five cents for every dollar white families earned.18 Those who mourn the loss of childhood innocence in the twenty-first century tend to ignore the struggles faced by many children of color. In previous centuries children born into slavery, for instance, were regarded as individual units of labor and sometimes sold away from their families. Fifty-five percent of African American families lived below the poverty line in 1959, and not only were most suburbs economically out of reach, but unfair housing practices kept suburbs white.19 Our collective nostalgia for this mythical version of childhood calls upon memories of Cleaver-like families, when divorce and family discord were unheard of. In reality it was during the 1950s that divorce rates started to climb, and the families of old that we revere existed mostly on television.

As we will see in Chapter 6, the 1950s was not the age of sexual innocence we often believe today. Pregnancy precipitated many marriages in the 1950s, when the median age of marriage for women dipped to its lowest point in the twentieth century, down to twenty in 1950.20 We often think that teenage pregnancy is a relatively new social problem, believed to be exacerbated by sexual content in media, but the reality is that it has been steadily decreasing. In 1950 the pregnancy rate for fifteen- to nineteen-year-olds was 80.6 per 1,000, whereas by 2009 the rate had dropped to an all-time low of 39.1 per 1,000.21 The difference is that pregnant teenagers now are less likely to be married or to be forced into secret adoptions or abortions. Teens also have more choices, including using birth control, having abortions, or keeping their babies without getting married.

What has changed is our perception of teens and sex. Also changed is our idea of what it means to be a teenager: before the mid-twentieth century, people in their teen years often held adult roles and responsibilities, including full-time jobs and parenting. We have redefined the teenage years as more akin to childhood than adulthood, making previously normative behavior unacceptable.

So childhood in the past was not necessarily as innocent as our collective memory suggests. Nor was chewing gum or talking out of turn the biggest complaint adults had about children during that time, as a highly publicized but fabricated list claimed in portraying how benign children’s problems used to be in the good old days.22 People feared changes in youth then just as we do now: juvenile delinquency and promiscuity were big concerns even during this hallowed time, something we conveniently forget today.

Perceptions of childhood now reflect adult anxieties about information technology, a shifting economy, a multiethnic population, and an unknown future. Not unlike the Victorian era, childhood innocence today is prized, and we often attempt in vain to remove children from the adult world. Parents are viewed as the guardians of both their children and the meaning of childhood itself. Those who permit children to cross over into adulthood are demonized, particularly if they are poor or members of a racial minority group. Many believe that childhood today ends too soon, with popular culture frequently cited as a cause of this “crisis.” Innocence is seen as a birthright destroyed by popular culture or ineffective parents. Yet we often overlook the realities of children’s experiences in both the past and the present that defy the assumption that childhood without electronic media was idyllic.

The Best Time to Be a Child?

Throughout the past three centuries, childhood has gradually expanded, as our economy has enabled most young people to delay entry into the paid labor force.23 We have also prolonged the time between sexual maturity and marriage, particularly as the onset of puberty happens sooner now for girls than in the past.24 It is only within the past century that such a large group of physically mature people has had so few rights and responsibilities and been considered emotionally immature, a luxury of prosperity.

So while we mourn the early demise of childhood, the reality is that for many Americans, childhood and adolescence have never lasted longer. At the beginning of the twentieth century, a large number of young people entered the labor force and took on many adult responsibilities at fourteen and earlier, compared with eighteen, twenty-one, or even later today. Childhood has been extended chronologically and emotionally, filled with meaning it cannot sustain. Contemporary childhood is charged with providing adults with hope for the future and remembrance of an idealized past. It is a complex and contested concept that adults struggle to maintain to offset anxiety about a changing world.

Although the news provides a steady diet of doom-and-gloom reports about young people, on the whole the news is good. High school and college graduation rates are at an all-time high.25 Youth violence has dropped considerably since the 1990s; the number of juveniles arrested for homicide fell 68 percent between 1994 and 2009; juvenile arrests for any violent offense fell nearly 58 percent between 1994 and 2009.26 The teen birthrate fell 37 percent between 1991 and 2009.27 According to the Centers for Disease Control and Prevention (CDC), fewer teens reported being sexually active in 2011 than in 1991, and those who are used condoms more often. Fewer were involved in fistfights or reported carrying guns in 2011 compared with the early 1990s, and young people were much more likely to wear seat belts and avoid riding in a car driven by a drunk driver. The percentage committing or contemplating suicide decreased steadily as well.28

As we will see in Chapter 9, the percentage of high school seniors who report drinking alcohol has been declining annually, as has drinking to intoxication.29 Rates of both consumption and intoxication are substantially lower than in the 1970s and 1980s, when their parents were likely teens. Likewise, illegal drug use has declined since the 1970s and 1980s.

So in spite of public perception and the fears that the new media technologies are breeding a violent, sex-obsessed, hedonistic, and self-indulgent young generation, young people are mostly more sober, chaste, and well-behaved than their parents were. Additionally, nearly 55 percent of teens volunteer, averaging twenty-nine hours of service each year.30

Certainly, some changes in the experience of childhood can be attributed to media and new technologies, which young people often spend a lot of time using. For example, cell phones allow kids both greater freedom from and greater contact with parents. Kids can be physically tracked through Global Positioning System software embedded in their phones and called to return home. On the other hand, children can use online social networking to forge relationships with less parental intervention, and their regular mode of communication with friends might be very different from their parents’. Although many adults fear that playing video games or using the Internet will harm children, we forget that these technologies also prepare them to participate in a high-tech economy. Visual literacy has become more important in the past fifteen years, as video games and computers became staples in many homes that could afford them. The children we should be worried about are the ones who don’t have access to these new technologies.

Changes in childhood may be most apparent when we see kids constantly texting, but technology itself cannot single-handedly create change. The often hidden social conditions that alter experiences of childhood were also behind the creation of these new products; changes in the economy produce both the widespread use of new devices and also the specific experiences of childhood. Media technologies are the icons of contemporary society; they represent and reflect what scares us most about the unknown future. We tend to see the most tangible differences and credit them with creating powerful social changes without considering other structural shifts. To understand changes in childhood, we must look further to see more than media.

Childhood has not disappeared. Instead, it is constantly shifting and mutating with the fluctuations in society. The perceived crisis in childhood is derived from the gap between the fantasy of childhood and the reality. We have filled the idea of childhood with our hopes and expectations as well as our fears and anxieties. We want childhood to be everything adulthood is not, but in reality adults and children live in the same social setting and have more experiences in common than adults are often comfortable admitting. Our economic realities are theirs; they suffer when parents lose their jobs, and they feel the effects of political conflicts, too. Although we would like to keep the realities of terrorism and violence away from them, unfortunately we cannot. For many young people, these are firsthand experiences, not mediated by television, movies, or popular culture at all.

If childhood has changed, it is because the world has changed. Rapid change can be very frightening, even if the changes have many positive outcomes. Social life has been shifting so rapidly in the past few years that yesterday’s technological breakthrough is tomorrow’s dinosaur, obsolete and useless. Changes in family structure and economic realities reduce adults’ ability to control youth. Automated households rarely require young people to perform lengthy chores to ensure the family’s survival, so they are not needed at home as much as they were a few generations ago. And many young people have access to more information now than they did in the past. Yes, this is partially due to media, but it is also a reflection of changing attitudes about sexuality, for example, since open discussion of this topic is much more prevalent than in generations past.

This does not mean that adults should ignore the challenges of childhood—in fact, many of the problems children face are overshadowed by the fear of media. For instance, an up-close look at the roots of problems often blamed on media, like youth violence and teen pregnancy, reveals that poverty, not media, is the common denominator.31 When communications scholar Ellen Seiter studied adult perceptions of media effects on children, she found that the middle class and affluent were the most likely to blame media for harming children and causing social problems.32 Lower-income people have enough experience with the reality of problems like violence to know that the media are not a big part of the equation in their struggles to keep their children safe in troubled communities. Yet our continued response is to focus on the supposed shortcomings of parents and to see popular culture as enemy number one of childhood. Politicians often encourage this focus, making it seem as though popular culture matters more to children than food stamps and health care.

Ultimately, it is easier to blame media than ourselves for policies that fail to adequately support children. School levies are routinely rejected because we don’t want to pay more taxes or don’t trust the adults who control school budgets. Affordable, quality child care is so difficult to find because as a society we do not monetarily value people who care for children: those who do frequently earn less than minimum wage. It is not media that have changed childhood over the past century; it is our changing economy and the reluctance of the public to create programs that deal with the very real challenges children face.

Why We Blame Media Anyway

In spite of the fact that kids today are actually doing quite well by many measures, we worry anyway. Concerns about the next generation are anything but new; as I discuss in the next chapter, fears that young people are going downhill are perennial. What is different is that now we have visual manifestations of these fears in the form of all kinds of new media.

In the worrier’s defense, many people aren’t aware that kids aren’t in as much trouble as catchy news reports often suggest. It’s no wonder, then, that we focus on the most visible changes: in the past century one of the biggest transformations has been the growth of electronic media, which by their very nature command our attention. We have seen the development of movies, television, popular music, video games, the Internet, and social networking, each of which has received its share of public criticism.

New technologies elicit fears of the unknown, particularly because they have enabled children’s consumption of popular culture to move beyond adult control. Parents may now feel helpless to control what music their kids listen to, what movies they see, or what websites they visit. Over the past hundred years, media culture has moved from the public sphere (movies) to private (television) to individual (the Internet and social networking), each creating less opportunity for adult monitoring.

This is not to say that media content is unimportant, nor am I suggesting that parents ignore their children’s media use. These are important family decisions, but on a societal level media culture is not the root cause of social problems. Media do matter, but not in the way many of us think they do. Communications scholar John Fiske describes media as providing “a visible and material presence to deep and persistent currents of meaning by which American society and American consciousness shape themselves.”33 Media are not the central cause of social change, but they are ever present and reflect these changes and also bring many social issues to our attention.

Media have become an important American social institution intertwined with government, commerce, family, education, and religion. Communications scholar John Hartley asserts that media culture has replaced the traditional town square or marketplace as the center of social life. He and others argue that it is one of our few links in a large and increasingly segmented society, serving to connect us in times of celebration and crisis in a way nothing else quite can.34 In a sense media have become representative of society itself. The media receive the brunt of the blame for social problems because they have become symbolic of contemporary American society.

Media culture also enables young people to develop separate interests and identities from their parents. The biggest complaint I have heard from parents is that their children like toys, music, movies, or television programs that they consider junk, and therefore must have harmful consequences. This generational—and perennial—concern reflects adults’ attempts to exercise their power by condemning tastes that differ from their own sensibilities and to displace their fears of the future onto popular culture.

When we relentlessly pursue the idea that media damage children, we are saying that children are damaged. Adults have always believed that kids were worse than the generation before, dating back to Socrates in ancient Greece, who complained about children’s materialism, manners, and general disrespect for elders. Blaming the media is much like attempting to swim full force against a powerful riptide: you end up exhausted and frustrated and get nowhere. Understanding what is really happening will allow the swimmer to survive. Likewise, projecting our collective concern about both childhood and society onto media will not take us very far unless we use it as a starting point to better understand structural factors that have a much larger impact on young people’s well-being.

Notes

1. Joel Bakan, “The Kids Are Not All Right,” New York Times, August 21, 2011, welfare.html; Beverly Beckham, “Childhood Lost to Pop Culture,” Boston Globe, November 7, 2005.

2. Jenifer Johnston, “Have We Poisoned Childhood?,” Sunday Herald (Glasgow), September 17, 2006; Hal Niedzviecki, “Can We Save These Kids?,” Globe and Mail (Toronto), June 5, 2004.

3. Both Kansas and Vermont have statutes allowing children as young as ten to be transferred to adult criminal court.

4. For a comparison between children’s and women’s disempowerment, see Barrie Thorne, “Re-visioning Women and Social Change: Where Are the Children?”

5. David Buckingham, After the Death of Childhood: Growing Up in the Age of Electronic Media.

6. Joe Kincheloe, “The New Childhood: Home Alone as a Way of Life”; Jenny Kitzinger, “Who Are You Kidding? Children, Power, and the Struggle Against Sexual Abuse,” 168.

7. Henry Jenkins, “Introduction: Childhood Innocence and Other Myths,” in The Children’s Culture Reader, edited by Jenkins.

8. Karin Calvert, Children in the House: Material Culture of Early Childhood, 1600–1900.

9. Stephen Kline, “The Making of Children’s Culture,” in The Children’s Culture Reader, edited by Jenkins.

10. Miriam Formanek-Brunell, Made to Play House: Dolls and the Commercialization of American Girlhood, 1830–1930 (New Haven, CT: Yale University Press, 1993).

11. Viviana A. Zelizer, “From Useful to Useless: Moral Conflict over Child Labor,” in The Children’s Culture Reader, edited by Jenkins, 81.

12. David Nasaw, Children of the City: At Work and at Play.

13. Zelizer, “From Useful to Useless,” 84.

14. Anthony Platt, “The Child-Saving Movement and the Origins of the Juvenile Justice System,” in Juvenile Delinquency: Historical, Theoretical, and Societal Reactions to Youth, edited by Paul M. Sharp and Barry W. Hancock, 2nd ed. (Upper Saddle River, NJ: Prentice-Hall, 1998), 3–17.

15. Grace Palladino, Teenagers: An American History; US National Center for Education Statistics, 1900–1985, 120 Years of Education: A Statistical Portrait (Washington, DC: Digest of Education Statistics, annual).

16. Jenkins, “Introduction,” 4.

17. Judith Stacey, Brave New Families: Stories of Domestic Upheaval in Late-Twentieth-Century America.

18. US Census Bureau, Statistical Abstract of the United States, Tables P60-200 and P60-203, in Current Population Reports (Washington, DC: Government Printing Office, 1999).

19. James Heintz, Nancy Folbre, and the Center for Popular Economics, The Ultimate Field Guide to the U.S. Economy (New York: New Press, 2000).

20. US Bureau of the Census, Statistical Abstract of the United States, Current Population Reports, Series P20-537 (Washington, DC: Government Printing Office, annual).

21. National Center for Health Statistics, Natality, Vital Statistics of the United States (1937–), Birth Statistics (1905–1936) (Washington, DC: US Bureau of the Census); Joyce A. Martin et al., “Births: Final Data for 2009,” National Vital Statistics Reports (Hyattsville, MD: National Center for Health Statistics) 60, no. 1 (2011),

22. A fake list of the top-ten biggest problems in schools of the 1990s (robbery, drug abuse, pregnancy) compared with the supposed top-ten problems in 1940 (gum chewing, running in the halls, improper clothing) was widely distributed and treated as real in spite of evidence otherwise. For a discussion, see Mike Males, Framing Youth: Ten Myths About the Next Generation.

23. James E. Côté and Anton L. Allahar, Generation on Hold: Coming of Age in the Late Twentieth Century.

24. Marcia E. Herman-Giddens et al., “Secondary Sexual Characteristics and Menses in Young Girls Seen in Office Practice: A Study from the Pediatric Research in Office Settings Network,” Pediatrics 99 (April 4, 1997): 505–512.

25. Camille L. Ryan and Julie Siebens, Educational Attainment in the United States: 2009, Current Population Reports, 2012 (Washington, DC: US Bureau of the Census),

26. Howard N. Snyder and Melissa Sickmund, Juvenile Offenders and Victims: 2006 National Report (Washington, DC: US Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention, 2006), 64–65; C. Puzzanchera, B. Adams, and W. Kang, “Easy Access to FBI Arrest Statistics, 1994–2009,” 2012 (homicide arrests: 1,170 in 2009 versus 3,660 in 1994; all violent arrests: 85,890 in 2009 versus 148,430 in 1994).

27. Martin et al., “Births: Final Data for 2009.”

28. Department of Health and Human Services, “Trends in the Prevalence of Sexual Behaviors,” in National Youth Risk Behavior Survey: 1991–2011 (Washington, DC: Centers for Disease Control and Prevention, 2012), 43; Department of Health and Human Services, Youth Risk Behavior Surveillance—United States, 2011 (Washington, DC: Centers for Disease Control and Prevention, 2012).

29. Monitoring the Future Study, “Long-Term Trends in Lifetime Prevalence of Use of Various Drugs for Twelfth Graders” (Ann Arbor: Survey Research Center, University of Michigan, 2012),

30. “Youth Helping America: The Role of Social Institutions in Teen Volunteering,” (Washington, DC: Corporation for National and Community Service, 2005), http://www.polk-

31. For a discussion, see Mike Males, The Scapegoat Generation: America’s War on Adolescents.

32. Ellen Seiter, Television and New Media Audiences, 58–90.

33. John Fiske, Media Matters: Everyday Culture and Political Change, xv.

34. John Hartley, The Politics of Pictures: The Creation of the Public in the Age of Popular Media. For further discussion, see Daniel Dayan and Elihu Katz, Media Events: The Live Broadcasting of History.



Does Social Networking Kill? Cyberbullying, Homophobia, and Suicide

Is the new digital world fraught with danger? It is easy to understand why many people would be concerned about the uncharted waters we seem to be traversing online. Will Facebook change the nature of friendships? Might texting alter the ability of its users to construct complete sentences? Has the distinction between public and private eroded, thanks to social networking? And will young people post too much online and not consider the consequences of their actions?

These are just a few of the many questions that our digital environment has created. As I discussed in the first chapter, with the advent of any new medium comes anxiety about what kind of changes it will create and the potential harms we might not yet anticipate. Moral panics are especially likely to arise when a new form of media emerges and its users are primarily those who seem particularly vulnerable or threatening (or both). New media do create cultural changes, in this case shifting the way that people communicate and navigate relationships.

Having grown up before the use of social media took off—and in many cases before the widespread use of the Internet—many adults are especially concerned about young people’s use of these new forms of communication. Texts and tweets are harder to monitor than the old-fashioned landline telephone and mail, making it easier for kids to circumvent parental control at times. And perhaps most alarming to parents, these new media may make it more challenging to shield their children from others. The idea that parents can put a wall up between their families and the outside world was never quite a reality, but the new media environment makes this inability abundantly clear.

Perhaps parents’ and critics’ biggest fear is that new media will harm young people, a fear heightened by national news coverage over the past few years of several tragedies involving young people who committed suicide. A common thread in these stories places at least some of the blame on “cyberbullies,” who allegedly harassed the victims through social networking sites, taking old-fashioned teasing to a new and very public level and leading critics to ponder, and parents to fear, the threat of new technology.

With headlines like “Mean Girls: Cyberbullying Blamed for Teen Suicides” (ABC News), “As Bullies Go Digital, Parents Play Catch-Up” (New York Times), and “Death by Cyber-Bully” (Boston Globe), it is easy to understand why concerns about cyberbullies would rise. CBS News ran a story titled “Phoebe Prince: ‘Suicide by Bullying’; Teen’s Death Angers Town Asking Why Bullies Roam the Halls.” A USA Today column titled “Bullying: Are We Defenseless?” implores readers to “find a way to save the children.” Not only do parents want to protect their kids from harm at school, but new technology allows meanness to pervade new spaces. The Washington Post reported in 2010 that “the Internet’s alarming potential as a means of tormenting others … raises questions whether young people in the age of Twitter and Facebook can even distinguish public from private.” “It’s just a matter of when the next suicide’s going to hit, when the next attack’s going to hit,” says attorney Parry Aftab in the article, sounding very much like concerns that arose about terrorism after September 11, 2001.1

Of course, children aren’t alone in using the Internet to defame others. The very nature of the Internet allows for uncensored and seemingly anonymous speech, giving angry, often hateful websites free rein. Visit nearly any website that allows comments and you will see a range of sometimes abusive language, perhaps nowhere more than on sites political in nature. Let’s face it: people of all ages can be really rude online.

But cyberbullying involving young people strikes a nerve; most of us have had the experience of being teased, at least mildly, as children, but taunts typically ended after school let out. New media like the Internet and smartphones are extremely difficult to monitor, so it is easy to understand why social networking sites, texting, and other online communication would create concerns. New media reflect a brave new world of sorts, where something as common as a schoolyard taunt takes on new meaning when it happens electronically. Spoken words may fade into the past eventually, but electronic messages never really die.

Not only is there fear that kids will communicate inappropriately with one another, but the Internet also seems to make it easier for strangers to interact with children, creating new concerns about cyber predators. As I wrote in Kids These Days: Facts and Fictions About Today’s Youth, fears of “stranger danger” and kidnapping coincide with children using social networking tools. Stories about kidnappings or sexual assaults highlight the potential dangers adults could pose to young people online. This fear was doubtlessly heightened by NBC’s Dateline series “To Catch a Predator.” The hidden-camera segments aired from 2004 to 2007, featuring producers who posed as young teens online in order to catch adult men who came to a house, presumably to have sex with a minor. Seemingly ordinary men appeared, suggesting that Internet predators could be anyone, anywhere, just a click away.

Besides concerns of abuse from peers and predatory adults, the shift into the electronic age has also sparked fears that the Internet itself is dangerous. Stories of marriages ruined by too much online gaming, shopping, or Facebook friending suggest that the very existence of the Internet can be detrimental to our health and relationships. Talk of “Internet addiction” as a new form of mental illness also dominates self-help talk shows, despite the fact that it is not currently classified as an illness by the American Psychiatric Association.


Is the Internet putting people at greater risk for suicide, depression, kidnapping, and sexual abuse? While questions like this might be great fodder for cable-news pundits and talk-show hosts, the concern reflects anxieties about new media, not actual increases in the feared behaviors. Stories like one about a Chinese teen who sold his kidney to buy an iPhone and iPad may make us shake our heads about the impact new media has on young people, but the relationship most teens have with new technology is typically more mundane than extreme examples like this one.2

In this chapter, I explore two central fears surrounding new media: first, that cyberbullying can push people to commit suicide, and second, that online predators routinely use the Internet to lure kidnapping or sexual abuse victims or both. By comparing the headlines with data on these problems, we will see that although these new communication technologies have become much bigger parts of many people’s lives, the problems they are often associated with are in fact not getting worse. The stories we hear may be shocking and familiar, but although they are powerful examples, they are not necessarily representative of a larger trend of increased danger to young people.

“Cyberbullicide”: Familiar Tragedies

You probably have heard many of their names: Tyler, Megan, Amanda, Phoebe, and Jamey, to list a few. These are the names of young people who committed suicide, apparently after enduring online harassment. Their stories became regular features on national news programs and talk shows, seeming to be symbolic of the scary new Internet world we inhabit.

When news of Tyler Clementi’s tragic jump from the George Washington Bridge made headlines in the fall of 2010, it really hit home among students in my classes. Like Clementi, many of my students were eighteen-year-old college freshmen adjusting to being away from home for the first time, and some were dealing with a new roommate they didn’t particularly like.

Clementi was a student at Rutgers University who had apparently requested a roommate change after discovering that his roommate, Dharun Ravi, had set up a webcam to watch him become intimate with another man in their room. After Ravi streamed a second encounter live online, Clementi committed suicide. Ravi was charged with invasion of privacy, bias intimidation, and other offenses relating to a cover-up. In early 2012 he was found guilty of intimidation, witness tampering, and tampering with evidence. He could have faced up to ten years in prison and deportation to India but was sentenced to thirty days in jail (of which he served twenty) and three years of probation, and he was ordered to pay eleven thousand dollars in restitution.3

Ravi appeared to embody the role of cyberbully. His defense attorney attempted to frame the webcam spying as a juvenile prank, stating that “he hasn’t lived long enough to have any experience with homosexuality or gays” and claiming the incident was not a hate crime as charged. News coverage portrayed Ravi as immature but also cruel and dismissive of the seriousness of the charges; he even appeared to fall asleep during the closing arguments of his trial.4

Text messages and Twitter entries became evidence introduced at trial, highlighting the break from traditional forms of evidence. News stories translated text-speak for their presumably older readers (idc means “I don’t care,” rents means “parents,” for instance).5 According to an Associated Press report, the roommates checked out each other’s Internet postings before school began. Both wrote negative comments about the other online.6

But perhaps the most central part of this case, beyond the new forms of media it involved, was the issue of homophobia. Were Ravi’s actions meant to embarrass Clementi because he was gay? According to reports, friends denied that Ravi was homophobic, and Ravi did as well in a text to Clementi after the spying incident.7

The case raises questions about the meaning of homophobia and whether cyber-spying constitutes a hate crime. Broken down, the term homophobia translates to fear of homosexuality. This fear can manifest in many forms, including violence, harassment, exclusion, or discomfort. Homophobia exists on a continuum; people may feel homophobic without being openly hostile toward gay and lesbian individuals. It is a central part of the concept of hegemonic masculinity, a narrowly constructed idea of what it means to be a “real man.” Rigid definitions of manhood demand heterosexuality, and thus antigay slurs are a prime way that men degrade one another. In fact, homophobia affects men regardless of their sexual orientation, since it is used as both a put-down and a way to enforce strict adherence to hegemonic masculinity.

It’s hard to imagine the Rutgers case getting so much global attention if Clementi had been with a woman in his dorm room. Even before a jury agreed that Ravi’s actions constituted bias, the issue of sexuality was a large part of the case’s coverage. For instance, openly gay talk show host Ellen DeGeneres spoke out publicly about Clementi’s suicide, calling bullying an “epidemic” and stating that “the death rate is climbing.” Even blogger Perez Hilton, known for often inflammatory online posts about celebrities, reconsidered his approach after Clementi’s death.8

This incident happened at a time when several other young people who had been teased by classmates about their sexual orientation—or perceived sexual orientation—made national news after committing suicide. Jamey Rodemeyer, a fourteen-year-old boy from Buffalo, New York, was bullied about his perceived sexual orientation and committed suicide that same year, garnering coverage from NBC’s Today, CNN, the New York Times, the Huffington Post, and other national news outlets.9

In response to the many highly publicized stories of bullied young people, in 2010 Dan Savage and Terry Miller founded the It Gets Better Project, a website where adults assure young gays and lesbians that they will find acceptance and should not be discouraged by teasing or discrimination they may currently face. President Barack Obama and Secretary of State Hillary Rodham Clinton, as well as other prominent political leaders in the United States and abroad, have participated in the project. Not only can the Internet be used to harass others, but it clearly can also help people who may feel isolated and alone find a sense of community and acceptance.

Social networking and the Internet are relatively new ways of expressing homophobia. One Rutgers instructor claimed, “Intolerance is growing at the same time cyberspace has given every one of us an almost magical ability to invade other people’s lives.”10 Yet it is important to recognize that young people are certainly not alone in perpetuating homophobia; political leaders often reinforce the idea that it is okay to discriminate based on sexual orientation. A Michigan antibullying law faced opposition from conservative groups that argued that laws preventing antigay comments violate free speech and the right to express religious beliefs. A compromise included a “moral and religious clause” that allows students, for example, to tell others that they will go to hell because of their sexual orientation. The bill passed in 2011.11

Is intolerance increasing, and is cyberbullying against gay and lesbian young people an epidemic with growing death rates, as reaction to Clementi’s suicide suggested?

Realities of Suicide and Cyberbullying

It appears that lesbian, gay, bisexual, and transgendered teens are more likely to experience cyberbullying than their peers, according to a few recent studies. A 2009 study of just under twenty-five hundred students in a Colorado county found that LGBT youth were more than twice as likely as those who identified as heterosexual to report “electronic harassment” (nearly 30 percent versus 13 percent).12 In a 2010 study of eleven- to eighteen-year-olds, nonheterosexual respondents reported a greater likelihood of being bullied both on- and offline—but also a greater likelihood of admitting to bullying others, online and off.13

However, there is also no evidence that LGBT youth are bullied more now than in the past. If anything, growing awareness and acceptance of gays and lesbians over the past few decades has likely stemmed some of the harassment compared with earlier eras, when teachers and administrators might have been less likely to intervene. Legal changes after a 1999 US Supreme Court decision also mean that schools can be held liable if they do not make reasonable efforts to protect students from sexual harassment.14

Although there is evidence that LGBT youth do experience more harassment than their peers, there is no solid evidence of a new epidemic, nor that LGBT youth suicides are significantly higher nationwide. Instead, we had an “epidemic” of tragic cases that became national news stories.

Because death certificates do not include sexual orientation, we just don’t know for sure if suicide rates for LGBT youth are higher on a national scale. Despite this limitation, many people have seen a statistic claiming that 30 percent of all youth suicides involve LGBT individuals. As a 2008 Suicide Prevention Resource Center report explains, this number emerged from a ballpark estimate contained in a 1989 Health and Human Services report rather than an observed trend.15

This 30 percent statistic has become what sociologist Joel Best calls a “mythic statistic,” a statistic that takes on a life of its own, spreading through news reports to become taken for granted as common sense.16 For gay-rights activists, this statistic seems to provide proof of the seriousness of homophobia in American society and creates a sense of urgency to prevent harassment.

It may be that LGBT youth are more likely to commit suicide than their peers; we just don’t have the data to know for sure. We do have data from several small studies on suicide attempts and suicidal ideation (thoughts about suicide) suggesting that LGBT individuals are more likely than their peers to attempt and to think about suicide. Exactly how much more varies from study to study. Because acceptance of LGBT individuals varies significantly across regions of the country, the social context of any given community likely influences each study’s outcome, making it difficult to generalize from these isolated studies to the nation as a whole.17

Although we don’t know the sexual orientation of suicide victims nationwide, we do know their ages. One major misconception is that teens are the group most prone to suicide. In fact, they are among the least likely to commit suicide. According to data from the Centers for Disease Control and Prevention (CDC), forty-five- to fifty-four-year-olds were the group most likely to commit suicide in 2009 (the most recent year for which data are available), with 19.3 suicides per 100,000. The age groups with the fewest suicides? Five- to fourteen-year-olds (0.7 per 100,000), followed by fifteen- to twenty-four-year-olds (10.1 per 100,000). Rates for young people have been essentially flat for the past decade. But suicide rates have crept up slightly for thirty-five- to sixty-four-year-olds, while declining slightly for those sixty-five and older.18

Ironically, children, teens, and young adults are the least likely to take their own lives but are presumed to be the most at risk. This might be because we routinely hear that suicide is one of the leading causes of death for teens, behind car accidents and homicide. Though that statistic is true, the good news is that teens are unlikely to die of any cause; their older counterparts are more likely to commit suicide and also more likely to succumb to heart disease, cancer, and other ailments.19

If anything, we might wonder about a “suicide epidemic” among forty-five- to fifty-four-year-olds, whose rates rose from 13.9 per 100,000 in 1999 to 19.3 per 100,000 in 2009. But concerns for middle-aged Americans’ mental health are rarely expressed in dramatic news stories like the ones about young people who have been cyberbullied.

Has Bullying Gotten Worse?

Reports of bullying have become very widespread in recent years, with cable news devoting hours of coverage to the issue. CNN aired programs like Stop Bullying: Speak Up and Bullying: It Stops Here in 2011, the heightened coverage implying that there is a new crisis.20 But is there?

The Bureau of Justice Statistics publishes an annual report titled Indicators of School Crime and Safety that includes bullying as a measure. Bullying is defined there as being called names, insulted, made fun of, pushed, tripped, spit on, excluded from activities, or threatened with physical harm; about 28 percent of twelve- to eighteen-year-old students reported at least one of these experiences at school in 2009 (the most recent year of data available), a decline from 2007 and the same percentage as in 2005.21

Bullying clearly exists on a continuum; being called a name by one classmate one time is a very different experience from being harassed every day by many students, so it is difficult to measure the intensity of bullying from this study. However, only 6 percent report that they were threatened with bodily harm in 2009.22

Cyberbullying seems like a new, more menacing form of bullying, like a mutating virus that is more dangerous than the one from which it originates. Just as bullying can take many forms of varied intensity, so can cyberbullying. A 2007 Pew Research Center publication describes cyberbullying as “a range of annoying and potentially menacing online activities—such as receiving threatening messages; having their private emails or text messages forwarded without consent; having an embarrassing picture posted without permission; or having rumors about them spread online.”23

According to the Indicators of School Crime and Safety report, only 6 percent of students twelve to eighteen reported being cyberbullied. Other studies have come up with higher estimates; a 2011 nationally representative survey conducted by the Pew Internet and American Life Project found that 8 percent of all twelve- to seventeen-year-olds reported having been bullied online, and 12 percent reported being bullied in person. A 2010 Pew Internet and American Life study also found that young people were far more likely to be bullied at school than online (31 percent versus 13 percent).24 Both studies suggest that only a small minority of young people have had this experience. According to the 2011 Pew study, most respondents thought that others were mostly kind online, although twelve- to seventeen-year-olds were less likely to respond this way than adults eighteen and over (69 percent compared with 85 percent).25

Other studies, like a 2007 National Crime Prevention Council study, found that 43 percent of thirteen- to seventeen-year-olds reported having been cyberbullied; another study claimed that 72 percent of all students had been cyberbullied. Justin W. Patchin and Sameer Hinduja, authors of Cyberbullying Prevention and Response: Expert Perspectives, reviewed several surveys and found an average of 24 percent overall, the variation largely a result of narrower or wider definitions of cyberbullying.26 The more minor the behavior included in the definition, the larger the number of people likely to have had the experience. There’s a big difference between having an e-mail or text forwarded without one’s knowledge once or twice and having hateful taunts or doctored pictures of oneself repeatedly posted on Facebook.

Although the creation of a new word seems to indicate a different concept, people who experience cyberbullying often experience bullying offline, and both experiences have a lot in common. A 2010 study of middle school–age youth found that both on- and offline bullying victims and offenders were more likely to have attempted suicide than those not involved in bullying of any kind. The authors of the study note that “it is unlikely that experience with cyberbullying by itself leads to youth suicide. Rather, it tends to exacerbate instability and hopelessness in the minds of adolescents already struggling with stressful life circumstances.”27

That same year, the National Institutes of Health (NIH) reported on a study finding that cyberbullying victims had higher rates of depression than victims of traditional bullying and than those who cyberbully, in contrast to traditional face-to-face bullying, where both victim and offender tend to show elevated rates of depression.28 Perhaps those who experience cyberbullying feel even less of a sense of control over their environment, one that now extends into cyberspace.

Although it is problematic to presume that the Internet, social networking, or even cyberbullying alone is a primary cause of suicide, the Internet and new electronic communications create additional complexities in our lives and relationships. Yet it is important to note that suicide rates among young people have not been increasing.


So why is bullying so prevalent in the news today, even described as a crisis, when there is no evidence it is actually getting worse? As I discussed in Chapter 2, what has shifted in recent years is the construction of childhood and adolescence as periods of heightened vulnerability. As parents have fewer children, increasingly later in life, there is more focus on protecting children emotionally than in previous generations. Beyond concerns about bullying, so-called helicopter parenting extends well into early adulthood, as many parents seek to care for their kids’ emotional needs even while in college and beyond.29 Colleagues tell me of parents calling to try to get their kids added to closed college courses or to complain about a grade their young adult student received on a paper. It is this heightened level of caretaking, rather than any actual increase in bullying, that has shifted most over time.

Suicide is a far more complex behavior than the cyberbullying stories might have us believe at first glance. For instance, girls are more likely to report being cyberbullied, according to a variety of studies, yet males are much more likely to commit suicide.30 And middle-aged adults have the greatest likelihood of committing suicide. Despite the dramatic rise of social networking, the use of texting and of the Internet in general has not produced notable changes in suicide rates for young people.

Adult Cyber Predators

Stories of cyberbullying tend to focus on young people as the primary predators, too immature to exercise good judgment about how to treat others. Headlines like “Cyber Bullies Harass Teen Even After Suicide” (Huffington Post) and “The Untouchable Mean Girls” (Boston Globe) paint a picture suggesting that amoral youth are the core threat to their peers.31

Adults aren’t always so nice to each other, either. According to a 2010 survey, 35 percent of workers reported experiencing some kind of bullying at work, defined as “sabotage by others that prevented work from getting done, verbal abuse, threatening conduct, intimidation, and humiliation.” Nearly two-thirds of bullies are men (62 percent), while more than half of the victims are women (58 percent), suggesting an important gender dynamic in the workplace. The Occupational Safety and Health Administration (OSHA) notes that 2 million Americans report being victims of workplace violence each year as well.32

Of course, it’s not just young people who use the Internet to harass others. Whereas news reports often portray parents as hapless observers, struggling to understand the twenty-first-century world that their children inhabit, adults can be cruel online as well. For instance, a fifty-one-year-old commodities trader was sentenced to twenty-eight months in jail in 2012 for posting an “execution list” of dozens of Securities and Exchange Commission officials on his Facebook page. In a 2011 National Science Foundation report, a forty-year-old described being harassed online by a former high school classmate, who sent pornographic messages to his employer. A seventy-seven-year-old singer-songwriter allegedly received thousands of harassing e-mails from his fifty-five-year-old former manager, in violation of a restraining order requiring her not to contact him further. And the Arizona legislature has proposed a law to define “annoying” or “offensive” online posts as criminal acts, similar to prank phone calls.33

A 2006 incident was particularly shocking, because it involved an adult bullying a child online and became national news. Megan Meier was thirteen years old when she met a boy online—or so she thought. Through her MySpace page, she corresponded with someone she thought was named Josh for a couple of weeks before he turned on her and allegedly told her, “The world would be a better place without you.” Soon after, Megan committed suicide.

There never was a boy named Josh, though. He had been fabricated by Lori Drew, the forty-seven-year-old mother of a former friend who lived down the street. Megan had recently changed schools and made new friends, and Drew allegedly wanted to retaliate against Megan for not continuing the friendship with her daughter and to see if Megan gossiped about her daughter online.

Megan had struggled with depression prior to this, occasionally spoke of suicide, and took antidepressants—something Drew knew about before creating the fake boyfriend.34 Drew was later charged and found guilty of three misdemeanor computer crimes in federal court, but the conviction was later thrown out on appeal.35

Although cases like this one appear to be rare, we are more likely to hear of adults whose fake profiles are meant to lure young people in order to have sexual contact. As in Dateline’s now defunct “To Catch a Predator” series, stories highlighting young people lured into danger online still echo across the airwaves. In April 2009 Oprah aired “Alicia’s Story: A Cautionary Tale,” about Alicia Kozakiewicz, who at thirteen met a thirty-eight-year-old man online who abducted, beat, tortured, and raped her in 2002. The show also featured similar stories of young girls lured by predators online later that year.36 Kozakiewicz has used her horrific ordeal to speak out about online predators and is currently active in helping to create new laws to crack down on abusers.

Although news reports occasionally highlight other stories of young people meeting strangers online and becoming victims of crime, these events are fortunately rare and are not limited to teens. In 2008 a twenty-four-year-old woman was killed when she answered a Craigslist ad for a nanny position. And in 2009 Julissa Brisman, twenty-six, working as a masseuse, was murdered in Boston by the man who became known as the “Craigslist Killer.”37 Countless stories of online dating gone awry and numerous scams perpetrated online serve as reminders that we all should be wary of those we encounter online.

But statistically, those we know offline pose a much greater threat.

Cyberreality: Safer than Ever?

Most of the time, violence has nothing to do with new media or social networking. Since Internet use became widespread in the mid-1990s, violent crime has dropped dramatically in the United States. Between 1991 and 2010, violent crime fell by 47 percent; from 2001 to 2010, the rate declined 13 percent. Over the past two decades, homicides in the United States declined 50 percent.38 Although new media certainly cannot be credited for much, if any, of these declines, they are a reminder that this is a much safer country than it was in the recent past.

When people are victims of violence, the perpetrator is often someone they know reasonably well. According to the 2010 FBI Uniform Crime Reports, about 44 percent of homicide victims were killed by family or acquaintances; just 12 percent were killed by strangers (44 percent of the offenders were not known). For child victims, 79 percent of perpetrators are their parents.39

Victims of other violent crime likely know the perpetrators as well. The National Crime Victimization Survey, a nationally representative survey of Americans twelve and older, found that in 2010 strangers were the offenders in just 39 percent of incidents (a decline from 44 percent in 2001). Female victims were much more likely than males to know their assailants (64 percent versus 40 percent). In cases of rape or sexual assault, 73 percent of females knew their attackers.40

The percentage is similar for juvenile victims. According to a 2008 Office of Juvenile Justice and Delinquency Prevention Program report, 74 percent of perpetrators were family members or acquaintances; the report also estimates that sexual assaults of children have declined since the 1990s. (NCVS data found that incidents of rape declined by 24 percent nationwide since 2001.)41

Although data are not collected as regularly on young people who run away or are kidnapped, previous studies suggest that about one in five minors who runs away from home has been physically or sexually abused, and nearly as many have substance abuse problems. More than three-quarters of abductions are committed by family members—typically a noncustodial parent—but of those kidnapped by a nonfamily member, more than half are taken by an acquaintance (a neighbor, family friend, or babysitter, for instance).42

Not only are we safer offline today than before the rise of social networking within the past decade, but people are gradually learning to protect their privacy more online as well. According to a 2007 Pew Internet and American Life study of teens, the vast majority—91 percent—report using social networking only to talk with people they already know. Two-thirds try to make their profile visible only to people they know; nearly a third have been contacted by a stranger online, and most (65 percent) reported that they ignored them. Just 7 percent of all teens who are online reported being scared by an online encounter with a stranger.43

Navigating the Cyber Age

Yes, there are plenty of pitfalls online, and people of all ages are still learning to navigate them. Whether it is writing nasty comments about schoolmates or coworkers on Facebook, sending texts or e-mails we later regret, or posting photos that we wouldn’t want the world to see, many people are still figuring out that although we might feel like we have private space electronically, that is mostly an illusion.

One of the best pieces of advice I received as the electronic age dawned was to send only e-mails, texts, voice mails, or posts that I wouldn’t mind seeing introduced as evidence in court. That sounds severe, but once sent, electronic communication has a way of taking on a life of its own beyond our control.

Part of the challenge of navigating an online identity is that as users of social networking, we are commodities rather than customers. Companies like Facebook, LinkedIn, and Google use our information for advertisers and have been criticized by privacy advocates for not always being transparent about how they use our information.44 Facebook’s frequent changes often switch users’ privacy options, making it difficult to maintain past settings without manually resetting them.

Love it or hate it, social networking is here to stay. Online platforms have become central in many people’s lives, not replacing offline contact by any means, but they have become integral to communication for work and socializing. As laws and etiquette struggle to keep up with ever-evolving technology, it is understandable that young people’s use of social networking tools would be a source of concern. But the danger is not quite as severe as some dramatic news accounts would have us believe.

Concerns about bullying and suicide can be channeled to address the limited access to mental health care that many people experience. Whether bullied online, at school, or at work, many people lack the resources or access to receive needed mental health care. According to the Substance Abuse and Mental Health Services Administration, private health insurance is the most common way people pay for mental health care; those without health insurance have more limited access to mental health services. SAMHSA estimates that the percentage of the population whose need for treatment goes unmet is nearly as high as the percentage who receive mental health care. Perhaps not surprisingly, the groups with the highest unmet need tend to be young adults eighteen to twenty-five, the unemployed, and those without health insurance.45

There’s no doubt that some people have chosen to use new forms of electronic communication to express hostility and hatred, which we are still learning to navigate individually and legally. Rude comments written on a public bathroom wall can be cleaned or painted over; electronic communication isn’t easy to completely erase.

Yet it’s important to keep in mind that despite these new challenges, young people appear to be managing much better than we might think. In fact, we might be more concerned about people who lack access to these new modes of communication and the implications for them both socially and economically. Tragic examples of young people who were bullied and later committed suicide might frighten us into thinking that a new trend of youth suicide coincides with the rise of social networking. As devastating as these incidents may be, they fortunately remain rare. As we struggle to figure out how to navigate this new and ever-changing media environment, parents often feel anxious about technology their children may use and understand better than they do.

Notes

1. Yunji De Nies, Susan Donaldson James, and Sarah Netter, “Mean Girls: Cyberbullying Blamed for Teen Suicides,” ABC News, January 28, 2010, cyberbullying/story?id=9685026; Jan Hoffman, “As Bullies Go Digital, Parents Play Catch-Up,” New York Times, December 4, 2010; John Halligan, “Death by Cyber-Bully,” Boston Globe, August 17, 2005; Kealan Oliver, “Phoebe Prince ‘Suicide by Bullying’; Teen’s Death Angers Town Asking Why Bullies Roam the Halls,” CBS News, February 10, 2010; Bruce Kluger, “Bullying: Are We Defenseless?,” USA Today, January 25, 2012, A11; Geoff Mulvihill and Samantha Henry, “NJ Student’s Suicide Illustrates Internet Dangers,” Washington Post, October 1, 2010, dyn/content/article/2010/09/30/AR2010093000534.html.

2. “Chinese Teen Sells Kidney to Buy iPhone, iPad,” USA Today, April 7, 2012, kidney/54090470/1.

3. David Ariosto, “Guilty Verdict in Rutgers Webcam Spying Case,” CNN, March 17, 2012, hpt=hp_t1; Ashley Hays, “Prosecutors to Appeal Ex-Rutgers’ Student’s 30-Day Sentencing for Bullying Gay Roommate,” CNN, May 21, 2012, hpt=hp_t3.

4. “Dharun Ravi Seen Snoozing in Court as Jury Prepares to Begin Deliberations,” CBS2 New York, March 14, 2012, seen-snoozing-in-court-as-jury-prepares-to-begin-deliberations/.

5. Richard Perez-Pena, “More Complex Picture Emerges in Rutgers Student’s Suicide,” New York Times, August 12, 2011, tyler-clementi-suicide-more-complex-picture-emerges.htm.

6. Geoff Mulvihill, “In Tyler Clementi’s NJ Dorm, Tensions Were High,” Atlanta Journal Constitution, September 8, 2011, http:/ clementis-nj-1163838.html.

7. Mulvihill and Henry, “NJ Student’s Suicide Illustrates Internet Dangers.”

8. “Ellen Speaks out on Rutgers Suicide,” ABC News, October 1, 2010, 11773812; Andrew M. Brown, “If Perez Hilton Stops Bullying Celebrities, His Readers Will Desert Him,” Telegraph (Glasgow), October 15, 2010, bullying-celebrities-his-readers-will-desert-him/.

9. Hemanshu Nigam, “Cyberbullying: What It Is and What to Do About it,” ABC News, October 7, 2011, id=14675883#.T6AaUtmh2So; Elizabeth Held, “27 Percent of College Students Say They Have Been Cyber Bullied,” USA Today, December 9, 2011, they-have-been-cyber-bullied; “Jamey Rodemeyer Still Being Bullied After His Death Say Tim and Tracy Rodemeyer,” Huffington Post, September 27, 2011, death_n_983926.html; Danah Boyd and Alice Marwick, “Bullying as True Drama,” New York Times, September 22, 2011, cyberbullying-rhetoric-misses-the-mark.html.

10. Judy Peet, “Rutgers Student Tyler Clementi’s Suicide Spurs Actions Across U.S.,” New Jersey Real Time News, October 3, 2010,

11. Marilisa Kinney Sachteleben, “Michigan Senate Passes Anti-bullying Law, Despite Objections,” Yahoo News, November 3, 2011, passes-school-anti-bullying-law-despite-162200561.html.

12. Bob Roehr, “Harassment/Suicide Rates Doubled for Gay/Lesbian Students,” Medscape Today News, November 15, 2010,

13. Sameer Hinduja and Justin W. Patchin, “Cyberbullying Research Summary: Bullying, Cyberbullying, and Sexual Orientation,” Cyberbullying Research Center, 2011,, 2.

14. See Davis v. Monroe County Board of Education, 526 US 629 (1999).

15. Suicide Prevention Resource Center, Suicide Risk and Prevention for Lesbian, Gay, Bisexual, and Transgender Youth (Newton, MA: Education Development Center, 2008).

16. Joel Best, Damned Lies and Statistics: Untangling Numbers from Media, Politicians, and Activists, 89–93. See also Benjamin Radford, “Is There a Gay Teen Suicide Epidemic?,” Live Science, October 8, 2010, teen-suicide-epidemic.html.

17. Suicide Prevention Resource Center, Suicide Risk and Prevention, 16–17.

18. Centers for Disease Control and Prevention, “Death Rates by Age and Age-Adjusted Death Rates for the 15 Leading Causes of Death in 2009: United States, 1999–2009,” Deaths: Final Data for 2009 (National Vital Statistics Report) 60, no. 3 (2012), table 9, p. 21.

19. Arialdi M. Miniño, “Mortality Among Teenagers 12–19 Years: United States, 1999– 2006,” National Center for Health Statistics, May 2010, no. 37,

20. “Stop Bullying: Speak Up,” CNN, 2011,; “CNN, Facebook, Cartoon Network, and Time Inc. Team Up for Anti-Bullying Efforts,” CNN, October 4, 2011, %E2%80%9Cbullying-it-stops-here%E2%80%9D-to-air-october-9/.

21. Simone Robers et al., “Bullying at School and Cyber-Bullying Anywhere,” Indicators of Crime and School Safety: 2011, National Center for Education Statistics, US Department of Education, 2012,, 44–50.

22. Ibid.

23. Amanda Lenhart, “Mean Teens Online: Forget Sticks and Stones, They’ve Got Mail,” Pew Internet and American Life Project, June 27, 2007.

24. Amanda Lenhart, “Cyberbullying 2010: What the Research Tells Us,” Pew Internet and American Life Project, May 6, 2010, See slide 16; also, note that in slide 22 just 4 percent said they sent a sexually suggestive picture of themselves.

25. Amanda Lenhart et al., “Teens, Cruelty, and Kindness on Social Networking Sites,” Pew Internet and American Life Project, November 9, 2011, teens.aspx.

26. “Teens and Cyberbullying,” National Crime Prevention Council, February 28, 2007, 2; Justin W. Patchin, “How Many Teens Are Actually Involved in Cyberbullying?,” Cyberbullying Research Center, April 4, 2012, teens-are-actually-involved-in-cyberbullying.html.

27. Sameer Hinduja and Justin W. Patchin, “Cyberbullying Research Summary: Cyberbullying and Suicide,” Cyberbullying Research Center, 2010,, 2 (emphasis in the original).

28. “Depression High Among Youth Victims of School Cyber Bullying, NIH Researchers Report,” National Institutes of Health, September 21, 2010,

29. Larry Gordon, “Keeping Parents’ ‘Helicopters’ Grounded During College,” Los Angeles Times, August 29, 2010, parents-20100829.

30. For suicide rates by gender, see Centers for Disease Control and Prevention, “Trends in Suicide Rates Among Persons Ages 10 Years and Older, by Sex, United States, 1991–2006,” September 30, 2009. See also Lenhart, “Mean Teens Online.”

31. “Alexis Pilkington Facebook Horror: Cyber Bullies Harass Teen Even After Suicide,” Huffington Post, May 24, 2010,; Kevin Cullen, “The Untouchable Mean Girls,” Boston Globe, December 28, 2011,

32. “Results of the 2010 and 2007 WBI Workplace Bullying Survey,” Workplace Bullying Institute, 2010, survey/; “Workplace Violence,” OSHA Factsheet, Occupational Safety and Health Administration, 2002, workplace-violence.pdf.

33. “Ex-US Trader Gets 28 Months in Jail for Death Threats,” Thomson Reuters News and Insight, April 9, 2012, US_trader_gets_28_months_in_jail_for_death_threats/; “Defining a Cyberbully,” National Science Foundation, November 8, 2011, cntn_id=121847; Hailey Branson-Potts, “Singer-Songwriter Leonard Cohen Testifies About Harassing Voice-mails,” Los Angeles Times, April 9, 2012, about-harassing-voicemails.html; “Arizona Bill Broadens Online Bullying Laws,” ABC News, April 3, 2012, laws-16064936.

34. Christopher Maag, “A Hoax Turned Fatal Draws Anger but No Charges,” New York Times, November 28, 2007,

35. De Nies, James, and Netter, “Mean Girls.”

36. “Alicia’s Cautionary Tale,” on The Oprah Winfrey Show, April 15, 2009; “Child Predators on the Internet,” on The Oprah Winfrey Show, June 13, 2009.

37. “Craigslist Killing: Rare, but Not Unique,” CBS News, July 16, 2010,; Sarah Armaghan, Kerry Burke, and Dave Goldiner, “Craigslist Date with Murder for N.Y. Beauty Julissa Brisman, Model and Internet Masseuse Shot in Hotel,” New York Daily News, April 17, 2009, craigslist.

38. Federal Bureau of Investigation, Crime in the United States, by Volume and Rate per 100,000 Inhabitants, 1991–2010, Uniform Crime Reports for the United States, 2011 (Washington, DC: US Department of Justice, 2011), us/cjis/ucr/crime-in-the-u.s/2010/crime-in-the-u.s.-2010/tables/10tbl01.xls.

39. Federal Bureau of Investigation, “Crime in the United States, Expanded Homicide Data,” in Uniform Crime Reports for the United States, 2011 (Washington, DC: US Department of Justice, 2011), u.s/2010/crime-in-the-u.s.-2010/offenses-known-to-law-enforcement/expanded/expandhomicidemain; US Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth, and Families, Children’s Bureau, Child Maltreatment, 2010, 2011.

40. Jennifer L. Truman, Criminal Victimization, 2010: National Crime Victimization Survey (Washington, DC: US Department of Justice, 2011),, 9.

41. David Finkelhor, Heather Hammer, and Andrea J. Sedlak, “Sexually Assaulted Children: National Estimates and Characteristics,” in National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children, Office of Juvenile Justice and Delinquency Prevention, August 2008,; Truman, Criminal Victimization, 2010, 2.

42. Heather Hammer, David Finkelhor, and Andrea J. Sedlak, “Children Abducted by Family Members: National Estimates and Characteristics,” in National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children, Office of Juvenile Justice and Delinquency Prevention, October 2002; David Finkelhor, Heather Hammer, and Andrea J. Sedlak, “Nonfamily Abducted Children: National Estimates and Characteristics,” in National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children, Office of Juvenile Justice and Delinquency Prevention, October 2002.

43. Amanda Lenhart and Mary Madden, “Teens, Privacy, and Online Social Networks,” Pew Internet and American Life Project, April 18, 2007, Summary-of-Findings.aspx.

44. Cecilia Kang, “Google Announces Privacy Changes Across Products; Users Can’t Opt Out,” Washington Post, January 24, 2012, products-users-cant-opt-out/2012/01/24/gIQArgJHOQ_story.html.

45. National Survey on Drug Use and Health, “Source of Payment for Outpatient Mental Health Treatment/Counseling Among Persons Aged 18 or Older Who Received Outpatient Mental Health Treatment in the Past Year, by Age Group: Numbers in Thousands, 2009 and 2010,” in The NSDUH Report (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011),



What’s Dumbing Down America? Media Zombies or Educational Disparities

Can you name all of Brad and Angelina’s kids? President John F. Kennedy’s siblings? The sisters in Louisa May Alcott’s Little Women? Jacob’s sons from the Old Testament? My guess is the first question is easiest for most readers coming of age in the twenty-first century, whether we are actually interested in knowing the Jolie-Pitt children’s names or not. After all, you don’t have to try very hard to hear them mentioned in celebrity gossip or fan magazines that feature their pictures. Television, magazines, and the Internet help us much more with the first question than the others. The other questions require us to draw on knowledge of history, literature, and the Bible, information that is not circulating as freely and rapidly as information about contemporary popular culture. I admit that my ability to name any of Jacob’s sons is based solely on memories of the play Joseph and the Amazing Technicolor Dreamcoat. Is popular culture turning us into a nation of shallow idiots?

Many critics of popular culture are certain that the answer is yes. Although there are numerous examples of ways popular culture can help us waste time with content that is not exactly intellectually stimulating, the cultural explanation helps us overlook very important structural factors that shape educational disparities. Popular culture does not help us understand the educational experiences of young people who live in communities with overcrowded, dilapidated schools, whose families may have attained little education themselves.

But focusing on popular culture may get more attention than addressing these complicated structural factors. Consider these recent news stories suggesting technology and culture are to blame: “Is Google Making Us Stupid?” (Atlantic), “Does the Internet Make You Dumber?” (Wall Street Journal), “Are Smartphones Making Us Stupid?” (Huffington Post), “Generation Hopeless: Are Computers Making Kids Dumb?” (Associated Press), and finally “Is It Just Us, or Are Kids Getting Really Stupid?” (Philadelphia), which argues that the Internet is “rewiring” young people’s minds, and not for the best.1

A Washington Times story called “The Pull of Pop Culture” argues that young people must choose between “the pull of the popular or the push of schooling,” and that kids consistently choose the former, or 50 Cent over Shakespeare. A Chicago Sun-Times story, “Successful Kids Reject Pop Culture’s Message,” notes that being able to graduate from high school is based on kids’ “ability to reject the nonsense they are exposed to in our pop culture.”2 A 2008 book by Emory University English


professor Mark Bauerlein, The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future, reflects this same concern.

Within these stories, popular culture is cast as antithetical to education and knowledge, something that prevents learning. None address the massive budget cuts that many public schools have had to endure, or the dramatic racial and ethnic disparities in high school and college graduation rates. That one’s ZIP code is a central predictor of the quality of education one has access to also gets left out of these attention-grabbing headlines.

Concerns that popular culture makes us dumber predate the Internet age. Communications scholar Neil Postman argues in his 1985 book, Amusing Ourselves to Death, that as the United States shifted from “the magic of writing to the magic of electronics,” public discourse changed from “coherent, serious and rational” to “shriveled and absurd,” thanks largely to television.3 Drawing from Aldous Huxley’s Brave New World, Postman decries what he sees as the rejection of books in favor of a show-business mentality that has pervaded every aspect of public life, from politics and religion to education. He believed that these amusements undermine our capacity to think, encouraging us to move away from the written word—rationality, in his view—toward television and visual media.

Postman got it partly right. This new media world does act as a never-ending shiny object that grabs our attention. It distracts us from knowing too much about the way American society is structured and from becoming too aware of social problems that might seem boring compared with all the other interesting content competing for our attention. This keeps us focused on cultural explanations for social issues, rather than the less immediate—and arguably less interesting—structural conditions that shape our education system.

But instead of impeding knowledge and discourse across the board, new media like the Internet have increased public discourse, along with the number of amusements available to distract us. Television news programs now use interactive media to further engage citizens, through live blogs and using sites like YouTube in presidential debates, rather than just enabling people to be passively entertained. In fairness to Postman, who wrote before the Internet age, these developments are still unfolding. But rather than replacing traditional means of informing the public and furthering the flow of knowledge, new media and even popular culture are sometimes used to create new ways to educate.

This chapter considers the complaints that popular culture interferes with education and has created an intellectually lazy population. As we will see, changes in visual media and the increased ability to communicate electronically have altered how people interact and exchange information. Television, texting, and a culture awash in seemingly frivolous gossip may appear to be the causes of educational failure, but the reality is far less entertaining. Problems within


education stem from structural factors bigger than popular culture: lack of resources, inconsistent family and community support, and inequality.

While some school districts have significant dropout and failure problems, Americans are not as dumb as we are often told … at least no more so than we have been in the past. The vast divides of educational attainment and intellectual achievement can be explained not by popular culture, but by the continuing reality of inequality in American society.

A Nation of Television Zombies?

Does television put viewers into a hypnotic trance, injecting ideas into otherwise disengaged minds? During the 1970s, several books suggested that this was in fact the case. Marie Winn’s 1977 book, The Plug-in Drug, described television as a dangerous addiction. Following Winn, in 1978 Jerry Mander’s provocatively titled Four Arguments for the Elimination of Television concurred. According to Mander, television viewers are spaced out, “little more than … vessel[s] of reception” implanted with “images in the unconscious realms of the mind.” Put simply, Mander argues that television viewing produces “no cognition.”4

Television viewing increases with age (viewing is highest among adults seventy-five and over), yet nearly all of the concerns about television dulling the intellect have focused on children and teens.5 According to Nielsen Media Research, children and teens watch much less television than their elders: adults sixty-five and over watched an average of more than forty-seven hours per week in 2009, almost double that of children two to eleven, who averaged just over twenty-five hours. Teens twelve to seventeen watched the least television of any age group, averaging just over twenty-three hours.6 Television viewing has been declining in recent years, particularly among children and teens, who more often use newer forms of media during their leisure time.7

Both Winn’s and Mander’s books rely upon anecdotal observations yet make important charges about the negative effects television supposedly has on thinking. Some of these claims seem like common sense: television shortens one’s attention span, reduces interest in reading, promotes hyperactivity, impedes language development, and reduces overall school performance. Yet research into these claims reveals that television is not exactly the idiot box its critics suggest.

It might surprise you to learn that one of the programs most heavily criticized in the 1970s was Sesame Street, the educational program many of us grew up watching. Cognitive psychologist Daniel R. Anderson studied claims that preschoolers get transfixed in zombielike fashion while viewing Sesame Street, as well as the contradictory complaint that it contributes to hyperactivity. Studies where researchers observed three- to five-year-olds watch television found that their attention is anything but fixed: they look away 40 to 60 percent of the time, draw letters with their fingers in the air along with characters, and pay more attention to segments compatible with their current cognitive aptitude level. There was no evidence of hyperactivity after watching, and Sesame Street viewers had larger vocabularies and showed greater readiness for school than other children.8

Anderson and several colleagues conducted a long-term study, following 570 children from preschool into adolescence, to see if a relationship between preschool television viewing and academic performance exists. Their findings cast serious doubt on the speculation that television impedes learning later in life. In contrast to the claims that the nature of television itself dulls intellectual ability, their data repeatedly reveal that content matters: children—especially boys—who watched what they call “informative” programming as preschoolers had higher grade point averages and were likely to read more as teens. These findings counter a well-worn idea that television primes children to expect to be entertained at all times, leading to intellectual laziness and the idea that learning is boring.9

Their study also challenges the idea that television has a “displacement” effect: people spend more time watching television, and thus less time engaged in more rigorous intellectual activities like reading. Anderson and colleagues found that this effect was small, complicated, and observed only in middle- and high-income kids. Children who watched fewer than ten hours a week actually had poorer academic achievement than those who averaged about ten hours of viewing per week, and those who watched much more than ten had slightly lower academic achievement than those in the middle. The authors conclude that there is no evidence that television viewing displaces educational activities; instead, it is likely that television viewing replaces other leisure activities, like listening to music, playing video games, and so forth. The authors also found that more television viewing did not necessarily translate into doing less homework.10

The authors list other studies to support their claims, finding that television does not ruin reading skills, lower intelligence quotient (IQ), or otherwise interfere with education. This does not mean that parents should let kids watch as much television as they want and let them do their homework when they feel like it. We should certainly not presume from this study that television is children’s best teacher, but it does not necessarily have the damaging effect critics have suggested.

In fact, the best predictor of student achievement is parents’ level of education. It is likely that this effect is so strong—for better and for worse in some cases—that television cannot compete with the academic environment created by parents. Parents who encourage reading, read themselves, and emphasize the importance of education are a far more powerful influence than television. Not surprisingly, reading more is a good predictor of school success, but watching television does not interfere with literacy skills, as many critics charged.11 This connection means that educational achievement—a good predictor of one’s economic success—is inherited more than we might care to acknowledge.

The critiques of educational television have had political underpinnings in some cases. Anderson describes how much of the concern about Sesame Street was driven by those who sought to cut funding for the Children’s Television Workshop, and public television more generally, during the early 1990s.12 If opponents could find that educational programming had no impact, or even deleterious effects, they could justify eliminating public funding as yet another form of budgetary pork. But such was not the case.

Television has never really left the hot seat. More recently, TV has been blamed for causing attention deficit/hyperactivity disorder (ADHD) and even autism. Although it may seem like television’s electronic images can wreak havoc on the young brain’s wiring process, research does not support this conclusion. It is likely that people who have grown up with electronic media think differently from those who did not, but different is not always pathological.

Let’s look more closely at some of the research on ADHD and television. It is mostly based on correlations, and therefore causality cannot be assessed. But if you Google “television and ADHD” you will be told otherwise. One online article concludes in its headline, “It’s Official: TV Linked to Attention Deficit.”13 But the authors of the study cited by this article would not go that far.

The study in question, published in a 2004 issue of the journal Pediatrics, assessed the “overstimulating” effect television may have on children who watch TV as toddlers. To do so, the researchers asked parents about their children’s television viewing at ages one and three and asked them questions regarding their children’s attentional behavior at age seven. Although they did find a relationship between lower attentional behavior and more television viewing, the authors themselves acknowledge that “we have not in fact studied or found an association between television viewing and clinically diagnosed ADHD,” because none of the children in the study had been diagnosed.14 They also conclude that it is equally likely that a more lax or stressful environment might make television viewing more prevalent in early childhood and that television viewing is associated with, but not the cause of, children’s inattention.

Likewise, a 2006 study published in the Archives of Pediatrics and Adolescent Medicine found significant differences between children diagnosed with ADHD and their peers. The authors found “no effect on subsequent story comprehension in either group,” and suggested that “children who have difficulty paying attention may favor television and other electronic media to a greater extent than the media environment of children without attention problems.”15

Most interestingly, their study found that any effect that television watching had on attention was with the non-ADHD kids only; those diagnosed with ADHD showed no declines in attention after watching television. This study challenges the conventional wisdom that television has particularly adverse effects for children with ADHD; instead, the authors conclude that “the cognitive processing deficits associated with ADHD are so strongly rooted in biological predisposition that, among children with this diagnosis, environmental characteristics such as television viewing have a negligible effect on these cognitive processing areas.” A similar study, published in 2007, claimed an association between television viewing and “attention problems” but did not assess ADHD. Another study did use the protocol for diagnosing ADHD, but again it was unclear whether any participants had actually been diagnosed with the disorder.16

In 2011 a study of four-year-olds watching a fast-paced clip of SpongeBob SquarePants made national news, claiming that children who watched the clip did not perform as well on cognitive tests as the children in control groups who did not see the cartoon segment. To read the news coverage, it seemed as though the undersea cartoon character was uniformly making kids dumb. ABC News headlined its story “Watching SpongeBob SquarePants Makes Kids Slower Thinkers, Study Finds.”17 YouTube videos and blogs boldly stated that “SpongeBob makes kids stupid.”

The study itself, published in the journal Pediatrics, did not go that far. Based on a nonrandom sample of sixty four-year-olds from mostly white, affluent families, the experiment involved showing a subsample a fast-paced clip from the cartoon, followed by cognitive tests and a test measuring the ability to delay gratification. The SpongeBob viewers performed worse on all of these tests, but the authors could not—and did not—claim that this result enabled them to draw any conclusions about the children’s long-term intellectual prospects.

The authors’ conclusion included an interesting hypothesis: that the fantasy nature of the program actually required more of the children cognitively, making it harder for them to perform well on the tests immediately after. They state, “Encoding new events is likely to be particularly depleting of cognitive resources, as orienting responses are repeatedly engaged in response to novel events.”18 So we could just as easily conclude that a fast-paced cartoon requires more mentally and is more of a cognitive workout than slower tasks.

Some critics have even asserted that television is linked with autism, a claim that garnered coverage in a 2006 issue of Time and in the online magazine Slate.19 A study by economists found a correlation between autism rates and cable-television subscription rates in California and Pennsylvania. They did not measure what children watched (or if children were watching at all). Studies like this, although profoundly flawed, help maintain the doomsday specter of television. Easy answers for complex neurological processes are digestible to the public and thus make for interesting speculation, but probably will yield little in the way of getting to the root cause of autism, just as study after study on television and video games will likely do little for those attending struggling schools.

The cumulative effect of questionable studies helps create an environment where television seems to be the explanation for educational failure. The American Academy of Pediatrics insists that parents should not allow children under two to watch any television, for fear that it interferes with development, a claim that has yet to be scientifically supported. The AAP statement does not reference any research on infants but instead focuses on research on older children and teens. Still, the AAP concludes that “babies and toddlers have a critical need for direct interactions with parents and other significant care givers (e.g., child care providers) for healthy brain growth and the development of appropriate social, emotional, and cognitive skills.”20

Although television does not provide the direct one-on-one interaction babies need and can never replace human interaction, there is no evidence of direct harm from television. A 2003 Kaiser Family Foundation (KFF) report found that the majority of children under two—74 percent—have watched television (or at least their parents admit that they have), and 43 percent watch every day.21

I am not suggesting that propping infants up in front of the TV set is a good idea, especially if children are left unattended (in the KFF report, 88 percent of parents said they were with their children all or most of the time). But there is no evidence that television has a negative impact on infants either, only that it does not necessarily contribute to their development. If parents decide they would like to keep their children away from television, they have the right to make that choice. But many parents are made to feel guilty for choosing to allow some television viewing when there is no concrete evidence of harm. The TV blackout is especially difficult for parents with older children who might watch or those who enjoy watching TV themselves.

In contrast to the widespread belief that television interferes with intelligence, writer Steven Johnson suggests that the opposite might be true. In his book Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter, Johnson argues that television has actually become more complex and cross-referential and that the best dramas and comedies of today require significantly more of viewers than programs did in the past. He cites programs like 24, which expect viewers to think along with the show and draw on plot twists and information from previous episodes, in contrast to older television, which provided more exposition, if any was needed at all. He says that these kinds of shows are “cognitive workouts” and that even reality shows sometimes encourage us to develop greater social intelligence.22

Although I’m not sure that television makes most people smarter—I would hypothesize that those who are already intelligent can use television to improve upon an already strong intellect—the research does not support blaming educational failure on television. It is another attempt to use a cultural explanation while once again ignoring social structure.

Certainly, being able to concentrate and focus is important to educational success. But focusing on popular culture helps us ignore issues such as hunger and family and neighborhood violence that may interfere with learning. These issues are also more likely to be major concerns in low-income areas with high dropout rates.

Minding Newer Media

Although concerns about television will probably never completely fade away, they are sometimes overshadowed now by newer forms of media, particularly time spent online. Adults are more likely to spend time online than children or teens: adults aged thirty-five to forty-four spent an average of nearly thirty-nine hours online in 2008, compared with just over twelve hours for teens twelve to seventeen, according to Nielsen Media Research.23 And video games also cut into television time, especially for boys.

A 2007 study, published in the Archives of Pediatrics and Adolescent Medicine, found that 36 percent of their respondents in a nationally representative sample played video games, averaging an hour a day (and an hour and a half on weekends). Gamers reported spending less time reading and doing homework than nongamers.24 While this may indicate that video gamers’ schoolwork will suffer, other studies—including two that I discussed above—have found no evidence that video games were associated with lower academic performance.

In one of these studies, published in Pediatrics in 2006, the authors seem to contradict themselves. In their analysis they state that “video game use [was] not associated with school performance,” yet conclude that “television, movies and video game use during middle school years is uniformly associated with a detrimental impact on school performance.” They also neglect to add that television use itself has no negative impact, just heavy viewing during the school week, according to their own findings.25

Another researcher responded to this contradiction by writing to the journal that the “conclusions are not warranted,” yet the authors refused to accept their own study’s findings, responding that “from this ‘displacement’ perspective, we have little reason to believe that four hours of video game time would be any different from four hours of television time.”26

The reality is that very few people actually play video games for four hours a day, as the 2007 study found; in the Pediatrics study, 95 percent of kids played fewer than four hours a day. Their unfounded conclusion that video-game playing must negatively affect academic achievement reflects the persistent belief that video games are problematic; it’s equally likely that children who must spend the same amount of time in other activities, such as caring for siblings or doing extensive household chores, would also see lower academic achievement. Focusing on video games does not address the broader structural factors that impact school success or failure.

For people who have played video games, the question about gaming and academic achievement might seem backward. Wouldn’t games that require players to learn often-complex rules at increasingly difficult levels actually provide intellectual benefits? Steven Johnson, author of Everything Bad Is Good for You, makes this argument, using The Sims as an example of a game in which users need to master a host of rules as they play. Yes, common sense dictates that people (of all ages) should not neglect their other responsibilities in favor of playing, but the games themselves tend to offer a kind of mental workout, especially improving spatial skills.27

I suspect the disdain for video games and other new media comes from a lack of familiarity. The games are so much more complex now than when they first came out in the 1970s that they compel users to play far more than Pong, Merlin, or Atari did when I was growing up. Back then the games were much like other children’s toys, which kids played with occasionally and mostly grew tired of. By contrast, games today are likely to be serious endeavors that kids don’t give up after a few weeks but instead continue to play into adulthood.

Video games bear little resemblance to their predecessors from decades ago, and thus seem like a strange new development for many older adults. But at least some people over forty have a frame of reference for video games, unlike texting, a relatively new development. Recently, texting has come under fire for presumably ruining young people’s ability to spell and write coherently.

Many complaints come from people I can relate to: college professors who read students’ papers and e-mails. A Howard University professor told the Washington Times that electronic communication has “destroyed literacy and how students communicate.” A University of Illinois professor wrote to the New York Times that she is concerned about the informality in written communication, with no regard for spelling and grammar. A tutor wrote an op-ed in the Los Angeles Times about the “linguistic horrors” she frequently reads in students’ essays. “The sentence is dead and buried,” the author concludes.28

I can relate to these concerns, especially when I get rambling e-mails in all lowercase letters from students. But to tell the truth I have not seen a decline in students’ ability to write since e-mail and texting became so widespread. And according to a Pew Internet and American Life study, teens don’t confuse texting with actual writing. A surprising 93 percent of those surveyed indicated that they did some form of writing for pleasure (journaling, blogging, writing music lyrics, and so on). Most teens—82 percent—also thought that they would benefit from more writing instruction at school. Others are also optimistic. Michael Gerson of the Washington Post writes, “A command of texting seems to indicate a broader facility for language. And these students seem to switch easily between text messaging and standard English.”29

Texting reminds me of another form of language use that is all but obsolete: shorthand. Shorthand used to be considered a skill, often taught in school to prepare students for secretarial work. Court reporters also master a language within a language in their daily work. But because texting is associated with young people, critics presume it is a detriment rather than a new skill. And like television, video games, and the Internet, texting is not just a young person’s activity (although the younger people are, the more texts they are likely to send per day).30 According to industry research, the median age of a texter is thirty-eight.31

Perhaps at the heart of these concerns are uncertainties about these new media. Will they distract people from being productive citizens? Enable too many shortcuts? Much has been written recently about teens and multitasking, mostly with an undercurrent of anxiety. “Some fear that the penchant for flitting from task to task could have serious consequences on young people’s ability to focus and develop analytical skills,” warns a 2007 Washington Post article. Time published an article in 2006 called “The Multitasking Generation,” stating that “the mental habit of dividing one’s attention into many small slices has significant implications for the way young people learn, reason, socialize, do creative work and understand the world. Although such habits may prepare kids for today’s frenzied workplace, many cognitive scientists are positively alarmed by the trend.” The article goes on to quote a neuroscientist who fears that multitaskers “aren’t going to do well in the long run.”32

It is interesting that, where young people are concerned, the prognosis is grim rather than a celebration of the possible positive outcomes of multitasking—a skill most mothers will tell you they have no choice but to learn. As Time observes, multitasking is a valuable professional skill, as any brief observation of the frenzied Wall Street trader or busy executive reveals.

The Kaiser Family Foundation released a report on youth multitasking in 2006 and found that while doing homework, the most likely other activity teens engage in is listening to music. Most of the multitasking comes while doing other leisure activities, like instant messaging and Web surfing at once. The KFF study seems to imply that using a computer to do homework invites distraction. “When doing homework on the computer is their primary activity, they’re usually doing something else at the same time (65% of the time),” the report concludes.33 It’s also the case that people think they are better at multitasking than they actually are. As many other professors have likely also observed, students who spend time online during class lectures and discussions can miss crucial information, though they might think they can do both at once.

Yet computer use is a vital part of being educated in the twenty-first century. In creating access to a tremendous amount of information, the Internet also changes the nature of education. Items that once had to be researched in a physical library can be called up on a computer or smartphone, basically eliminating the need to memorize many facts. These shifts remind me of Albert Einstein’s alleged ignorance of his own phone number, which he supposedly said he could look up if he needed to know it. How many phone numbers do you know now that phones remember them for us?

Yes, the Internet and other technologies can be major distractions and have created new ways to take intellectual shortcuts and to cheat. Education needs to evolve along with the technology, shifting the nature of learning away from memorization and toward teaching how to think. The Internet can be and has been used to thwart cheating, too, and rather than treating new media as the enemy, educators need to make peace with them and embrace them as much as possible.

Just as the written word moved societies away from oral culture, visual media require a new intelligence that needs to be fully integrated into education today. Our continued reliance on standardized testing impedes this shift in many ways. But a new way of sharing information has arrived and will likely continue to mutate in the coming years.

How Dumb Are We Really?

For those who glorify the past, the present or future can never compare. What’s interesting is that complaining about how little the next generation knows never abates. People have found young people’s knowledge lacking for centuries, and commentators have grimly assessed Americans’ intellectual abilities, whether it be math, reading skills, or geography, for more than a century.34 The complaint that we are superficial and interested only in amusements has been around for a long time. But are we really less knowledgeable than our predecessors?

One source of support critics look to is SAT (formerly known as the Scholastic Aptitude Test) scores. Between 1967 and 1980, average verbal scores fell 41 points, from 543 to 502, a fall of about 8 percent, and math scores fell 24 points, from 516 to 492. As you can see in Figure 4.1, this appears to suggest that high school aptitude nose-dived during the 1970s. Since that time, average math scores rose to an all-time high in 2005 before falling back to previous levels in the years after. Verbal scores continue to fluctuate but have yet to match levels of the late 1960s and early 1970s.

Figure 4.1: Average Critical Reading and Math SAT Scores, 1967–2011 Source: College Board

Critic Marie Winn, author of The Plug-in Drug, argues that television is the “primary cause” for this decline, claiming that as kids grew up watching more television in the late 1960s, their ability to read declined. But as the above-noted studies detail, television had little to do with high school grade point average, which is highly related to SAT scores.35

Ironically, the decline in SAT scores from four decades ago reflects a positive trend: more high school students are taking the test and planning on attending college than in the past. According to the US Department of Education, in 1972, 59 percent of high school seniors planned on attending college, compared with 79 percent in 2004.36 The number of students enrolled in college more than doubled between 1970 and 2009 as well.37

Not only are more people attending college, but many more African American and Latino students are attending than in 1970. These groups have been historically underrepresented and tend to have slightly lower scores on average than whites or Asian Americans.38 In 2011 more students took the SAT than ever before; 44 percent of the test takers were minority students, the largest proportion in history.39 Minority students are also more likely to attend underfunded and overcrowded urban schools with less qualified teachers, and in some cases English is their second language.40

Donald P. Hayes, Loreen T. Wolfer, and Michael F. Wolfe of Cornell University suggest that a decline in the quality of textbooks also helps explain declining achievement. They examined eight hundred textbooks published between 1919 and 1991 and found that the newer texts are less comprehensive and, in their estimation, less likely to prepare students to master reading comprehension.41


Still others wonder if verbal abilities are really declining at all. Psychologists have studied scores on intelligence quotient tests from the beginning of the twentieth century, when they were first administered, to 2001 and found that IQ scores are continually rising—so much so that the tests have had to be periodically recalibrated to keep the population’s average score at 100. In what is called the “Flynn Effect,” after psychologist James R. Flynn, total unadjusted IQ scores rose about 18 points between 1947 and 2002. This means the average IQ of someone in 2002—always scaled to 100—would have been about 118 on the 1947 scale (conversely, a person of average intelligence in 1947 would score about 82 on the 2002 scale). Four points of the gain come from vocabulary.42

So are we smarter or dumber? Flynn says that “today’s children are far better at solving problems on the spot without a previously learned method for doing so.” He also suggests that if we look at achievement tests of children’s reading from 1971 to 2002, fourth and eighth grade students’ reading skills improved, but by twelfth grade there were no differences over time.43

Looking at the data to which he refers, what is most interesting is that nine-year-old boys in particular gained a great deal on reading scores—15 points between 1971 and 2008, compared with girls’ 10-point gain.44 In all age groups, significant racial and ethnic disparities persist, despite some reduction since 1971. This may partially explain why verbal SAT scores haven’t risen (but not why they fell). In any case, these observations refute the notion that young children can’t read because of television.

The case of IQ and SAT disparities reminds us that these tests are only approximations of intelligence and aptitude, rife with problems of cultural bias, and reflect the narrow ways that aptitude and intelligence are defined. The long-term changes in both measures tell us that people are better prepared for one test, but not for the other … yet they purport to measure some of the same skills.

The National Center for Education Statistics (NCES) conducted assessments of adult literacy in 1992 and 2003 and found that overall results were virtually the same, but there were significant differences in terms of race, education, and age. Whites had higher scores than those in other racial categories, although their scores were virtually unchanged during the two time periods. Blacks and Asian Americans made gains in 2003, while Latino literacy scores declined.

Not surprisingly, having more education correlated with higher scores. Nineteen- to forty-nine-year-olds had the highest scores, with adults over sixty-five having the lowest.45 Overall, people of all ages are reading less than in past decades, according to a 2007 National Endowment for the Arts report. But despite declines in leisure reading, the NEA study found that nearly 60 percent of adults twenty-five to forty-four still read for pleasure. In contrast, a Harris Interactive Poll found that between 1995 and 2004, the percentage of adults who reported reading as their favorite leisure activity increased, from 28 to 35 percent (although in 2007 it fell to 29 percent); in every year reading was ranked the respondents’ favorite leisure activity. A 2009 NEA study found increases in adults who read literature—with the highest increases among young adults eighteen to twenty-four.46 With the increasing popularity of iPads, Kindles, and Nooks, e-books may eventually reverse the downward trend.

Declines in reading have many causes and implications. We often think that this is a direct result of other media luring people away from books, but long-term studies have also found that in the past several decades Americans have less leisure time, period. A 2008 Harris Interactive Poll found that respondents had the least amount of leisure time since the poll began asking the question in 1973.47 Since reading is a more intellectually taxing activity, it may be the first to go after a busy day. I am personally an avid reader, but after a long day at work my eyes and brain don’t want to work that hard. I suspect that this rings true for other adults, who are working increasingly long hours to make ends meet. But we need to avoid viewing the past through rose-colored glasses, imagining that entire families once sat around reading books together. With high school graduation rates hovering below 25 percent until 1940, it is very likely that the number of people reading books was not as high as we might think.

Whereas pleasure reading might not be increasing, educational attainment has risen dramatically since 1960. According to the US Census, high school graduation rates more than doubled between 1960 and 2010, from just 41 percent of the population to 87 percent. Less than 8 percent of Americans had a college degree in 1960, compared with 30 percent in 2010. Rates for African Americans and Latinos still lag behind those for whites, but these groups have made tremendous gains during this time as well. African American high school graduation rates quadrupled, and college graduation rates increased sixfold. Latino high school graduation rates have nearly doubled since 1970 (the first year data were collected), while college graduation rates have tripled in that time period.48

Overall, we are a more educated society, one that places a great deal of emphasis on higher learning as a vital skill in our information-based economy. But as continuing disparities in graduation, literacy, and SAT scores detail, race and socioeconomic status remain significant factors. This is due not to different innate abilities, as controversial theories suggest, or only to media use, but to different educational opportunities built into our social structure.

Social Structure and Unequal Education

Nearly sixty years have passed since the landmark Supreme Court ruling Brown v. Board of Education, which voided the “separate but equal” doctrine that had dominated American education. Yet children today still largely inhabit two very separate public school systems: one that is largely effective in fulfilling its mission of providing students with a quality education and one that fails miserably. The latter tends to be the only option for the nation’s poorest children living in cities, helping to perpetuate the cycle of poverty. Focusing on television and other media as a primary source of educational failure enables us to overlook the pervasive nature of inequality, the most important predictor of educational attainment.

This cycle predates television and has nothing to do with popular culture. Its roots are firmly planted in the days of slavery, when many states outlawed teaching slaves how to read. Education was viewed as a major threat to white supremacy, both during and after slavery. After slavery ended, schools for African American children lacked many basic resources, and most colleges and universities excluded African Americans entirely.

While many children, like Brown v. Board of Education’s plaintiff, Linda Brown, lived close to “white” schools, residential segregation ensured that many did not. Segregation actually increased after World War II, with the growth of suburbs that were off-limits for blacks and government policies that refused to underwrite loans for whites who lived in neighborhoods with African Americans. This practice, called “redlining,” dictated the amount of risk involved in home loans, limiting who would get funding to live in a particular neighborhood or who could borrow money for home improvements. Until the passage of the Fair Housing Act in 1968, housing discrimination was rampant and legal, which helped to shuffle Americans into predominantly white or minority neighborhoods, as well as severely limit the property values in nonwhite neighborhoods.

Since schools in the United States are typically funded by property tax revenues, those in areas with a lower tax base had less funding for local schools. Less funding means less money to pay teachers well, so those with more experience and training go to districts with a higher tax base. Those teaching low-income kids are more likely to have emergency credentials and lack training in the specific subject they teach. They are more likely to have older and fewer textbooks, which means that students cannot take their books home to study. The school itself is more likely to be overcrowded and in disrepair.49

As if these obstacles were not enough, as I discuss in upcoming chapters, children living in low-income communities are more likely to experience family disruption and neighborhood violence, making it harder to focus on studying. One of the most important factors predicting educational success is having parents who actively support and are involved in their child’s education. Low-income parents who might need to work several jobs, have little education themselves, or in some cases speak minimal English might not be able to help their children as much as they might hope, despite their best intentions.

Among the best predictors of high educational attainment is having a parent with a high level of educational attainment—and thus the cycle unfortunately continues. Children who grow up with educated parents, who leverage their educations to obtain good-paying jobs, can afford to live in neighborhoods with higher property values and a better tax base for their schools, which provide better preparation for college success. Even when public funding is insufficient, public schools in affluent areas have the ability to raise private funds, so budget cuts and economic downturns affect them less.

These disparities reveal how socioeconomic status and race are deeply intertwined. Although African Americans and Latinos have closed some of the achievement gaps in recent decades, the gaps still persist. Think about the area where you live: Is it mostly segregated? Are there black or Latino neighborhoods that are mostly poor? If you live near just about any American city, the answer is probably yes. These communities initially developed due to public policies that ensured the continuation of racial inequality, and they persist even after the demise of slavery and Jim Crow laws and despite the civil rights movement of the twentieth century.

In the past decade, the federal government attempted to address these disparities through its No Child Left Behind (NCLB) policy. In theory, this program was supposed to assess how well schools worked and provide options for those attending schools that were less effective, including tutoring, after-school programs, or even transferring to another school.50 Critics have argued that NCLB overemphasizes standardized testing and has not provided sufficient funding to help bolster failing schools. The policy also includes sanctions and penalties for schools that do not meet certain goals, which further challenge schools in already difficult circumstances. Improving school achievement requires more than fixing failing schools—to significantly reduce the disparities in graduation rates and test scores, we also need to begin to repair the communities that schools serve, to help break the cycle at all points.

As you can imagine, making changes like this takes time, investment, and commitment, things that we have been mostly unwilling to provide to America’s poorest citizens, particularly during times of budget cuts. Throw in the contentious subject of race and inequality, and suddenly it seems much easier to talk about the problem of television, video games, and computers. But the kids who do not have access to computers are not likely developing the same sort of computer skills as their peers. The digitally disempowered are most likely to be from low-income families and may live in communities with libraries that have no computers, no Internet access, or no public library at all.

According to a 2002 Annie E. Casey Foundation study, having access to a computer at home increases educational performance, even when factors like income are taken into account. Not surprisingly, income is a major factor in determining who is likely to have a computer at home. In 2009, 84 percent of Asian American households had Internet access, as did 79 percent of white households. By contrast, just 60 percent of black households and 57 percent of Latino households did. Adults with less education were dramatically less likely to have computers at home: 39 percent of those who did not finish high school had a computer, compared with 63 percent of high school graduates, 79 percent of those with some college or an associate’s degree, and 90 percent of college graduates. These differences both reflect—and likely reproduce—economic disparities. A 2010 Kaiser Family Foundation report found that among eight- to eighteen-year-olds, whites are still more likely to have computers and Internet access at home than African American or Latino kids. White young people are also more likely to go online at school than African Americans or Latinos. Children with college-educated parents are also more likely to have computers and Internet access at home than children with less educated parents.51

Clearly, low-income families have more pressing needs, like food and rent, than buying a computer or subscribing to an Internet service provider. Even when schools in low-income communities do have computers, they may not be up to date, and the time students can individually spend using them is limited. Over time, this disparity in computer usage translates into less time to do homework assignments on a computer, less ease with computer software, fewer Internet research opportunities, and an overall educational disadvantage. A Duke University study found small but significant differences in students’ math and reading scores related to home computer access.52 Those without computer skills today already face serious employment setbacks, which are bound to multiply.

Common sense tells us that if someone is watching television, playing games, or otherwise avoiding school, work, or family responsibilities, that is not good. Planting oneself in front of the TV or computer screen for long stretches does have consequences, and this chapter does not suggest otherwise.

But those who argue that television and media are behind some of this country’s serious educational problems are off the mark. For some, the only solution is to never watch television or, as Jerry Mander suggested in 1977, eliminate it altogether. While traditional television is shifting away from live viewing on a dedicated television set, video viewing has expanded to other media platforms, with the explosion of YouTube and video capabilities on smartphones and other devices.

As our communications media shift, intellectual skills shift along with them. Rather than taking the glass-half-empty approach, we might instead look to see what we gain from these changes and how they can enhance education in the future. Beyond popular culture, we must also deal with the stubborn issue of inequality, which is the most important factor in understanding educational disparities—not simply whether someone watched Sesame Street as a toddler. Focusing on popular culture places the entire burden of educational disparities onto individuals or parents, while completely disregarding the stubborn nature of racial and economic inequality, which is often reflected and reproduced in our educational system.

Notes

1. Nicholas Carr, “Is Google Making Us Stupid?,” Atlantic, July/August 2008; Nicholas Carr, “Does the Internet Make You Dumber?,” Wall Street Journal, June 5, 2010; David Wygant, “Are Smartphones Making Us Stupid?,” Huffington Post, November 16, 2010, us_b_783750.html; Associated Press, “Generation Helpless: Are Computers Making Kids Dumb?,” September 30, 2010, helpless-are-computers-making-kids-dumb/; Sandy Hingston, “Is It Just Us, or Are Kids Getting Really Stupid?,” Philadelphia, December 2010.

2. Deborah Simmons, “The Pull of Pop Culture,” Washington Times, January 18, 2008, A17; Mary A. Mitchell, “Successful Kids Reject Pop Culture’s Message,” Chicago Sun-Times, June 7, 2001, 14.

3. Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business, 13, 16.

4. Jerry Mander, Four Arguments for the Elimination of Television, 204.

5. Bureau of Labor Statistics, “Table 11: Time Spent in Leisure and Sports Activities for the Civilian Population by Selected Characteristics, 2011 Annual Averages,” Economic News Release, June 22, 2012.

6. “Americans Using TV and Internet Together 35% More Than a Year Ago,” Nielsen Wire, March 22, 2010, report-q409/. See also PR Newswire, “Under 35’s Watch Video on Internet and Mobile Phones More Than Over 35’s; Traditional TV Viewing Continues to Grow,” Nielsen Reports TV, Internet, and Mobile Usage Among Americans Press Release, July 8, 2008,–08–2008/0004844888&EDATE=.

7. Brian Stelter, “Young People Are Watching, but Less Often on TV,” New York Times, February 8, 2012, people-are-watching-but-less-often-on-tv.html?pagewanted=all.

8. Daniel R. Anderson, “Educational Television Is Not an Oxymoron.”

9. Daniel R. Anderson et al., “Early Childhood Television Viewing and Adolescent Behavior: The Recontact Study.”

10. Ibid., 41.

11. Gary D. Gaddy, “Television’s Impact on High School Achievement.”

12. Anderson, “Educational Television Is Not an Oxymoron.”

13. Jean Lotus, “It’s Official: TV Linked to Attention Deficit,” post on White Dot, the International Campaign Against Television blog, July 21, 2008.

14. Dimitri A. Christakis et al., “Early Television Exposure and Subsequent Attentional Problems in Children,” Pediatrics 113 (2004): 708–713 (quote on 711).


15. Ignacio David Acevedo-Polakovich et al., “Disentangling the Relation Between Television Viewing and Cognitive Processes in Children with Attention- Deficit/Hyperactivity Disorder and Comparison Children,” Archives of Pediatrics and Adolescent Medicine 160 (2006): 358, 359.

16. Ibid., 359; Carl Erik Landhuis et al., “Does Childhood Television Lead to Attention Problems in Adolescence?,” Pediatrics 120 (2007): 532–537; Edward L. Swing et al., “Television and Video Game Exposure and the Development of Attention Problems,” Pediatrics 126 (2011): 214–221.

17. Courtney Hutchison, “Watching SpongeBob SquarePants Makes Kids Slower Thinkers, Study Finds,” ABC News, September 12, 2011, thinkers-study-finds/story?id=14482447#.T7UyCVKh2Sp.

18. Angeline S. Lillard and Jennifer Peterson, “The Immediate Impact of Different Types of Television on Young Children’s Executive Function,” Pediatrics 124 (2011): e1–e36.

19. Claudia Wallis, “Does Watching TV Cause Autism?,” Time, October 26, 2006,,8599,1548682,00.html; Greg Easterbrook, “TV Really Might Cause Autism,” Slate, October 16, 2006.

20. American Academy of Pediatrics, “Policy Statement,” Pediatrics 104 (1999): 341–343,;104/2/341.

21. Victoria J. Rideout, Elizabeth A. Vandewater, and Ellen A. Wartella, “Zero to Six: Electronic Media in the Lives of Infants, Toddlers, and Preschoolers,” Henry J. Kaiser Family Foundation, 2003,

22. Steven Johnson, Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter, 14, 96.

23. Katy Bachman, “Study: Teens Would Rather Hit Web, TV Than Read,” Adweek, June 19, 2008, rather-hit-web-tv-read-108836.

24. Hope M. Cummings and Elizabeth A. Vandewater, “Relation of Adolescent Video Game Play to Time Spent in Other Activities,” Archives of Pediatrics and Adolescent Medicine 161 (2007): 684–689.

25. Iman Sharif and James D. Sargent, “Lack of Association Between Video Game Exposure and School Performance: In Reply,” Pediatrics (2007): 1061, 1065.

26. Ibid., 413–414.

27. Johnson, Everything Bad Is Good, 14.

28. Shelley Widhalm, “OMG; How 2 Know Wen 2 Writ N Lingo?,” Washington Times, January 24, 2008, B1; Letter to the editor, “Email and the Decline of Writing,” New York Times, December 11, 2004, A18; Mary Kolesnikova, “Language That Makes You Say OMG; Teens Are Letting Emoticons and Other Forms of Chat-Speak Slip into Their Essays and Homework,” Los Angeles Times, May 13, 2008,,0,4111689.story.

29. Amanda Lenhart et al., “Writing, Technology, and Teens,” Pew Internet and American Life Project, April 24, 2008,, iv; Michael Gerson, “Don’t Let Texting Get U :-(,” Washington Post, January 24, 2008, A19.

30. Aaron Smith, “Americans and Texting,” Pew Internet and American Life Project, September 19, 2011,

31. CellSigns, industry text-messaging statistics, November 2008.

32. Lori Aratani, “Teens Can Multitask, but at What Costs?,” Washington Post, February 26, 2007, A1, dyn/content/article/2007/02/25/AR2007022501600.html; Claudia Wallis, “The Multitasking Generation,” Time, March 19, 2006,,9171,1174696,00.html.

33. “Media Multitasking Among American Youth: Prevalence, Predictors, and Pairings,” Henry J. Kaiser Family Foundation, December 12, 2006,

34. Karen Sternheimer, Kids These Days: Facts and Fictions About Today’s Youth, 8–9. See also Joel Best, The Stupidity Epidemic: Worrying About Students, Schools, and America’s Future, 4–8.

35. Marie Winn, The Plug-in Drug: Television, Computers, and Family Life, 286. See the College Board, “Mean SAT Scores by High School GPA: 1997 and 2007.”

36. US Department of Education, National Center for Education Statistics, National Longitudinal Study of the High School Class of 1972; High School and Beyond National Longitudinal Study of 1980 Seniors; National Longitudinal Study of 1988, Second Follow-Up; Student Survey, 1992; Education Longitudinal Study, 2002, First Follow-Up 2004. Note that the study is ongoing, the most recent cohort being the class of 2009, which is being followed through 2012.

37. US Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2010, chap. 3.

38. US Census Bureau, Educational Attainment by Race and Hispanic Origin: 1960 to 2006, US Census of Population, 1960, 1970, and 1980, vol. 1; Current Population Reports P20-550 and earlier reports; US Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2006, chap. 2; US Department of Education, National Center for Education Statistics, Status and Trends in the Education of Racial and Ethnic Minorities, 2006.

39. “Forty-Three Percent of 2011 College-Bound Seniors Met SAT College and Career Readiness Benchmark,” College Board, September 14, 2011, college-and-career-readiness-benchmark.

40. Sternheimer, Kids These Days, 69–71.

41. Donald P. Hayes, Loreen T. Wolfer, and Michael F. Wolfe, “Schoolbook Simplification and Its Relation to the Decline in SAT-Verbal Scores,” American Educational Research Journal 33 (1996): 489–508.

42. James R. Flynn, What Is Intelligence?, 8–9.

43. Ibid., 19, 20.

44. US Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2010 (NCES 2011-015), Table 124.

45. US Department of Education, National Center for Education Statistics, The Condition of Education, 2007 (NCES 2007-064), Table 18-1, and

46. “To Read or Not to Read: A Question of National Consequence,” National Endowment for the Arts, Research Report no. 47, November 2007, p. 7,; Harris Poll, “Reading and TV Watching Still Favorite Activities, but Both Have Seen Drops,” telephone poll of 1,052 American adults aged eighteen and over, conducted October 16–23, 2007,; “Reading on the Rise: A New Chapter in American Literacy,” National Endowment for the Arts, January 2009,

47. Anne H. Gauthier and Timothy Smeeding, “Historical Trends in the Patterns of Time Use of Older Adults,” paper presented at the Conference on Population Ageing in Industrialized Countries: Challenges and Issues, Tokyo, Japan, March 19–21, 2001,; Harris Poll, “Leisure Time Plummets 20% in 2008—Hits New Low,” telephone poll of 1,010 Americans aged eighteen and over, conducted October 16 and 19, 2008, Interactive-Poll-Research-Time-and-Leisure-2008-12.pdf.

48. US Census Bureau, Educational Attainment by Race and Hispanic Origin: 1960 to 2010, US Census of Population, 1960, 1970, and 1980, vol. 1; Current Population Reports and earlier reports,

49. Sternheimer, Kids These Days, 70–71.

50. US Department of Education, Office of the Secretary, Office of Public Affairs, No Child Left Behind: A Parents [sic] Guide (Washington, DC: Government Printing Office, 2003).

51. Tony Wilhelm, Delia Carmen, and Megan Reynolds, “Connecting Kids to Technology: Challenges and Opportunities,” Annie E. Casey Foundation, June 2002,; US Census Bureau, Reported Internet Usage for Individuals 3 Years and Older, by Selected Characteristics: 2009,; Victoria J. Rideout, Ulla G. Foehr, and Donald F. Roberts, “Generation M2: Media in the Lives of 8- to 18-Year-olds” (Menlo Park, CA: Kaiser Family Foundation, 2010),, 23.

52. Charles T. Clotfelter, Helen F. Ladd, and Jacob L. Vigdor, “Scaling the Digital Divide: Home Computer Technology and Student Achievement,” Harvard University Colloquia, July 29, 2008,



From Screen to Crime Scene
Media Violence and Real Violence

In 2011 the US Supreme Court upheld a federal court ruling that overturned a 2005 California law banning the sale of violent video games to minors. The statute, ironically signed into law by violent-movie veteran and then governor Arnold Schwarzenegger, equated violent video games with pornography and argued that video game violence incited actual youth violence.

Writing for the 7–2 majority, Justice Antonin Scalia noted that video games are protected by the First Amendment. He also described how popular culture has been blamed for inciting violence throughout American history; only the “villains” change (from dime novels to movies to comic books to television to music and now to video games).1

Critics from the Left and Right panned the ruling. An op-ed in the conservative Washington Times argued that “the Court took a wrong position in this case because the framers of the Constitution could not envision a world where children as young as 6 or 7 would be able to walk into shops without their parents’ consent and buy virtual weapons they could use to simulate murder.” The liberal magazine the Nation featured an article that argued in favor of protecting minors’ rights to free speech but criticized the ruling as “simply bizarre in dismissing the claimed harmful effects of violent depictions while still insisting on the strictest puritanical view of the dangers of sexual imagery.” The Washington Post editorialized that the decision was “misguided,” insisting that “the diminished threat of government intervention should in no way impede efforts to keep the most violent games out of the hands of children.”2

It should come as no surprise that many people were upset by the Court’s decision. For more than a century, it has been taken for granted as “common sense” that media violence causes actual violence: thousands of news reports and hundreds of studies on the connection have helped convince the public that this is a no-brainer. But the reality of violence is far more complex.

In recent years, video games seemed to connect the dots between high-profile school shootings. Immediately after the 2007 shooting at Virginia Tech, critics on cable-news networks blamed video games for the rampage. Although it turned out that the Virginia Tech shooter rarely played video games, the 1999 Columbine High School shooters were allegedly aficionados of Doom, a game in which a heavily armed protagonist stops demons from taking over Earth, and had reportedly used their classmates’ images for target practice in their play.


These incidents, combined with dramatic news headlines, have repeatedly told us that media are to blame. For example, “Study Links Violent Video Games to Violent Thought, Action” (Washington Post), “Violent Video Games and Changes in the Brain” (Los Angeles Times), “A Poisonous Pleasure” (St. Louis Post-Dispatch), and “Survey Connects Graphic TV Fare, Child Behavior” (Boston Globe) are a few of the thousands of stories that tell us media are the root cause of our violence problem.3

I confess: I once believed the popular-culture explanation myself. I’ve never been a fan of graphic violence in movies or television, and like many others I assumed that violence in popular culture spreads like some widespread virus. Before beginning graduate work in sociology, I studied psychology and read many of the media-violence studies. Students of psychology are taught that the individual is the primary unit of analysis and that something bad for the individual can be multiplied many times over and thus become a social problem. This perspective complements the American focus on individualism, in which we tend to view an individual’s behavior as stemming only from personal choices rather than social forces.

But as I began to review the research, I saw that the results were not as compelling as I had hoped or had heard on the news. I eventually realized that my feelings about violent movies were driven more by my personal distaste for media violence than by solid social science. Other scholars, like psychologist Jonathan L. Freedman, challenge the conclusions of this research too. Freedman evaluated every study published in English that explored the media-violence connection and concluded that “the evidence … is weak and inconsistent, with more nonsupportive results than supportive results.”4 Later, when I began graduate work in sociology, I developed a clearer understanding of large-scale patterns and learned about the structural roots of violence.

Choosing to avoid violent popular culture for ourselves and our families is certainly the right decision for many people, based on personal tastes, values, and beliefs. But those who enjoy action movies, music that references violence, or first-person shooter video games are not necessarily a threat to the rest of us. Their interests and engagement with violent media are more complex than a simple cause-effect relationship.

Ironically, because Americans spend so much time, energy, and money focusing on violent popular culture, we often fail to better understand violence itself. If violence is really the issue of importance here, we should start by studying violence before studying media.

This chapter critically examines the moral panic that surrounds popular culture and violence by showing how the fear of media violence distracts from the more complex structural causes of violence. The many taken-for-granted assumptions about the relationship between media and violence are profoundly flawed, as I address in the following pages. First, despite the increasingly graphic capabilities of video games, violence in the United States has plummeted over the past two decades. Second, when young people do become violent, they are not merely imitating media violence; other factors can better explain their behavior. Third, the research on media violence is not nearly as conclusive as many of its authors and sensationalized news reports would have us believe. And last, it is important to consider the context of violence to understand how people of all ages make sense of violence in media, their communities, our nation, and the world.

Violence Has Declined as Media Culture Has Expanded

Media culture has expanded exponentially over the past few decades. It’s hard to keep up with the newest gadgets that make popular culture more portable and allow us to be entertained virtually anywhere. Traditional media like television have grown from a handful of channels to hundreds, now accessible through a variety of online platforms. Video game graphics are far more vivid and realistic than they were in the early days of Pac-Man and Space Invaders.

Yet as media culture has expanded, we have seen dramatic declines in rates of crime and violence in the United States. Homicide rates are at their lowest levels in nearly five decades; between 1992 and 2010, the homicide rate fell by almost half, from 9.3 homicides per 100,000 Americans annually to 4.8 per 100,000. The rate of victimization for all violent crimes fell by 70 percent between 1993 and 2010.5

Figure 5.1: Homicide Victimization Rates, 1950–2010, per 100,000 Source: FBI Uniform Crime Reports, 1950–2010

Juveniles were no exception. The homicide offending rate for teens fourteen to seventeen fell by 71 percent between 1993 and 2000 and has been flat ever since. During the ten-year period between 2000 and 2010, arrests of juveniles for violent crimes (like murder, rape, and aggravated assault) declined 22 percent; for adults eighteen and older, the violent arrest rate also declined, but only by 8 percent.6 These numbers just don’t match the panic that popular culture will create a generation of people who take pleasure in hurting others.

It’s also important to keep in mind that adults are far more likely to commit violent crimes than juveniles are, although most media-violence arguments focus on young people as potential predators. True, we did see a rise in homicides committed by teens in the late 1980s, but we also saw a rise in homicides committed by adults during that period.7 There is no youth crime wave now; there was one in the late 1980s and early 1990s, but it was matched by an adult crime wave. Rates of both violent crime and property crime have fallen significantly in the past twenty years for both juveniles and adults. Yet most of our attention is placed on youth, especially when violent media are considered a motivating factor. We seldom hear public outcry about what motivates adults to commit crimes, although they are the most likely perpetrators. Eighteen- to twenty-four-year-olds have been, and remain, the age group most likely to commit homicide.

Figure 5.2: Homicide Offending Rates, by Age, 1980–2008 Source: Bureau of Justice Statistics

So in the big picture, juvenile violence rates have declined. But are kids becoming killers at earlier ages, lured by gory media they don’t understand but imitate with lethal results? The Federal Bureau of Investigation (FBI) began collecting data on homicide arrests for very young children in 1964, so we can test this quite easily, especially because very young perpetrators have a good chance of getting caught.

Homicide arrest rates for children ages six to twelve are minuscule: in 2010 there were 7 arrests out of a population of more than 36 million children. By contrast, 1,430 adults aged twenty-five to twenty-nine were arrested for homicide in 2010 (as were 90 people sixty-five or older). Still, 7 kids are 7 too many, until we consider that this was the fewest arrests since the FBI began keeping separate numbers for young children in 1964. Overall, the period between 1968 and 1976 featured the highest arrest rates, with the numbers generally plummeting since.8 Young kids are actually less likely to be killers now than in the past.

So why do we seem to think that kids are more violent now than ever? A Berkeley Media Studies Group report found that half of news stories about youth were about violence and that more than two-thirds of violence stories focused on youth.9 We think kids are committing the lion’s share of violence because they constitute a large proportion of crime news. Chances are good that some, if not all, of those seven incidents made the news and stuck in viewers’ memories. The reality is that adults commit most crime, but a much smaller percentage of those stories make the news. Emotional stories draw our attention far more than statistics, which are dry and often left out entirely of news stories that focus on young offenders.

But how do we explain the young people who do commit violence? Can violent media help us here? Broad patterns of violence do not match media use so much as they mirror poverty rates. While most people who are poor do not commit crimes and are not violent, there are large-scale patterns worth noting. Take Los Angeles, where I live, as an example. Here, as in many other cities, violent crime rates in lower-income areas are disproportionately high relative to their share of the population. The most dramatic example comes from homicide patterns.

For example, the Seventy-Seventh Street division (near the flash point of the 1992 civil unrest) reported 13 percent of the city’s homicides in 2010, yet it contains just 5 percent of the city’s total population. Conversely, the West Los Angeles area (which includes affluent neighborhoods such as Brentwood and Bel Air) reported less than 1 percent of the city’s homicides but accounts for 6 percent of the total population.10 If media culture really were a major cause of violence, wouldn’t the children of the wealthy, who have greater access to the Internet, video games, and other visual media, be at greater risk of becoming violent? The numbers don’t bear this out: violence patterns do not match media use.

Violence can be linked with a variety of issues, the most important being poverty. Criminologist E. Britt Patterson examined dozens of studies of crime and poverty and found that communities with extreme poverty, a sense of bleakness, and neighborhood disorganization and disintegration were most likely to have higher levels of violence.11 Violence may be an act committed by an individual, but it is also a sociological, not just an individual, phenomenon, one related to patterns of persistently high unemployment, limited educational opportunities, and geographic isolation from more stable communities.12

To attribute actual violence to media violence, we would have to believe that violence originates mostly in individual psychological functioning and thus that any kid could snap from playing too many video games or watching violent cartoons. Ongoing sociological research has identified other risk factors rooted in environment: substance use, overly authoritarian or lax parenting, delinquent peers, neighborhood violence, and weak ties to one’s family or community. If we are really interested in confronting youth violence, these are the issues that must be addressed first. Media violence is worth examining to better understand our cultural fascination with violence, but not as the primary cause of actual violence.

What about the kids who aren’t from poor neighborhoods and who come from supportive environments? When middle-class white youths commit acts of violence, we seem to be at a loss for explanations beyond media violence. These young people often live in safe communities, enjoy many material privileges, and attend well-funded schools. Opportunities are plentiful. What else could it be, if not media?

For starters, incidents in these communities are rare but extremely well publicized. These stories are dramatic and emotional and thus great ratings boosters. Central-city violence doesn’t draw nearly the same attention or public outcry to ban violent media. We seem to come up empty when looking for explanations of why affluent young white boys, for example, would plot to blow up their school.

We rarely look beyond the media for our explanations, but the social contexts are important here, too. Even well-funded suburban schools can become overgrown, impersonal institutions where young people easily fall through the cracks and feel alienated. Sociologists Wayne Wooden and Randy Blazak suggest that the banality and boredom of suburban life can create overarching feelings of meaninglessness in young people: perhaps they find their parents’ struggles to obtain material wealth empty and are not motivated enough by the desire for money to conform. White juvenile homicide arrest rates rose (along with black juvenile arrest rates) in the late 1980s and peaked in 1994. The number of African American juveniles arrested for homicide has tumbled even more sharply since its peak in the early 1990s, and homicide arrest rates are now at their lowest point in a generation.13

There’s been a lot of good news about crime and violence in the United States over the past two decades that gets lost in fears that media violence is creating violent young people. In reality, young people today are far less likely to engage in violence than their parents’ generation.

Violent Youth Are Not Mindless Imitators

When young people do commit crimes or act violently, news reports often compare incidents to popular culture. Didn’t the killer act like he was playing a video game? After the shootings at Columbine and other schools during the 1990s, video games bore the brunt of the blame. In 1999 retired army lieutenant colonel David Grossman published a book, Stop Teaching Our Kids to Kill, claiming video games serve as military-like training that inspires young people to murder. Grossman’s boot camp–instructor authority brought a lot of attention and fed the video game fear. “There’s a generation growing up that the media has cocked and primed for draconian action and a degree of bloodlust that we haven’t seen since the Roman children sat in the Colosseum and cheered as the Christians were killed,” he warned.14 But as we saw in the previous section, crime data show us that kids are not displaying bloodlust, at least not the real, unpixelated kind.

Critics like Grossman argue that video games are even more influential than movies, television, or music because the player is actively participating in the game. This, of course, is what makes video games fun and exciting and sets them apart from other media, where consumers take on more of a spectator role. Critics fear that players of violent games are rewarded for acts of virtual violence, which they believe may translate into learning that violence is acceptable. The fear comes straight out of B. F. Skinner: the idea that we learn from rewards, even vicarious rewards. The prevalence of violent video game playing among young boys troubles many for this reason.

Parents will tell you that their kids often play fight in the same style as characters from cartoons and other popular culture. But as author Gerard Jones points out in Killing Monsters: Why Children Need Fantasy, Super Heroes, and Make-Believe Violence, imitative behavior in play is a way young people may work out pent-up hostility and aggression and feel powerful. Cops and robbers, cowboys and Indians are all modes of play where children, often boys, have acted out violent scenarios without widespread public condemnation. Such play is different from acting violently, where the intention is to inflict pain.

The idea that children will imitate media violence draws on Albert Bandura’s classic 1963 “Bobo doll” experiment. Bandura and colleagues studied ninety-six children approximately three to six years old (the study doesn’t mention details about the children’s community or economic backgrounds). The children were divided into groups and watched various acts of aggression against a five-foot inflated Bobo doll. Surprise: when they had their chance, the kids who watched adults hit the doll pummeled it too, especially those who watched the cartoon version of the doll beating. Although taken as proof that children will imitate aggressive models from film and television, this study is riddled with leaps in logic.

The main problem with the Bobo-doll study is fairly obvious: hitting an inanimate object is not necessarily an act of violence, nor can real life be adequately re-created in a laboratory. In fairness, contemporary experiments have been a bit more complex than this one, using physiological measures like blinking and heart rate to measure effects. But although the only way to assess a cause-effect relationship with certainty is to conduct an experiment, violence is too complex an issue to isolate into independent and dependent variables in a lab.

Imagine designing a study where one group is randomly assigned to live in a neighborhood where dodging drug dealers and gang members is normal. Or where one group is randomly assigned to be verbally and physically abused by an alcoholic parent. What happens in a laboratory is by nature out of context, and real- world application is highly questionable. We do learn about children’s play from this study, but by focusing only on how they might become violent, we lose a valuable part of the data.

So whereas this study is limited because it took place in a controlled laboratory and did not involve actual violence, let’s consider a highly publicized case that on the surface seems to be proof that some kids are copycat killers. In the summer of 1999, a twelve-year-old boy named Lionel Tate beat and killed six-year-old Tiffany Eunick, the daughter of a family friend in Pembroke Pines, Florida. Claiming Lionel was imitating wrestling moves he had seen on television, his defense attorney attempted to prove that Lionel did not know what he was doing when he hurt Tiffany; he subpoenaed famous wrestlers like Hulk Hogan and Dwayne “the Rock” Johnson in hopes that they would perform for the jury to show how their moves are choreographed. Ultimately, they did not testify, but his attorney argued that Lionel should not be held criminally responsible for what he called a tragic accident.

The jury didn’t buy this defense, finding that the severity of the girl’s injuries was inconsistent with the wrestling claim. Nonetheless, the news media ran with the wrestling alibi. Headlines shouted “Wrestle-Slay Boy Faces Life,” “Boy, 14, Gets Life in TV Wrestling Death,” and “Young Killer Wrestles Again in Broward Jail.”15 This case served to reawaken fears that media violence, particularly as seen in wrestling, is dangerous because kids allegedly don’t understand that real violence can cause real injuries. Cases like this one are used to justify claims that kids may imitate media violence without recognizing the real consequences.

Lionel’s defense attorney capitalized on this fear by stating that “Lionel had fallen into the trap so many youngsters fall into.” But many youngsters don’t fall into this trap, and neither did Lionel. Lionel Tate was not an average twelve-year-old boy; the warning signs were certainly present before that fateful summer evening. Most news reports focused on the alleged wrestling connection without exploring Lionel’s troubled background. He was described by a former teacher as “almost out of control,” prone to acting out, disruptive, and seeking attention. A forensic psychologist who evaluated Lionel in 1999 described him as having “a high potential for violence” and “uncontrolled feelings of anger, resentment and poor impulse control.”16 Neighbors also described his neighborhood as dangerous, with a significant drug trade.

Evidence from the case also belies the claim that Lionel and Tiffany were just playing, particularly the more than thirty-five serious injuries that Tiffany sustained, including a fractured skull and massive internal damage. These injuries were not found to be consistent with play wrestling, as the defense claimed. The prosecutor pointed out that Lionel did not tell investigators he was imitating wrestling moves initially; instead, he said they were playing tag but changed his story to wrestling weeks later. Although his defense attorney claimed Lionel didn’t realize someone could really get hurt while wrestling, Lionel admitted that he knew television wrestling was fake.17

In spite of the fact that Lionel was deemed too naive to know the difference between media violence and real violence, he was tried as an adult and received a sentence of life in prison without parole. Ultimately, Lionel’s new defense team arranged for his sentence to be overturned in 2003: on appeal, a judge ruled that Lionel should have been granted a pretrial hearing to determine whether he understood the severity of the charges against him. The defense now claimed that Lionel had accidentally jumped on Tiffany while running down a staircase, and he was released in January 2004 on the condition that he would remain under court supervision for eleven years. His case provides an example of the ultimate contradiction: if children really don’t know any better than to imitate wrestling, why would we apply adult punishment? Completely lost in the discussion surrounding this case is our repeated failure as a society to treat children like Lionel before violent behavior escalates, to recognize the warning signs before it is too late.

Unfortunately, this was not the end of Lionel Tate’s troubles. Eleven months after his release, Lionel violated his probation when he was found out of his home at two thirty in the morning with a knife, and a judge extended his probation period to fifteen years. In May 2005, Lionel was arrested for robbing a pizza delivery person at gunpoint and in 2006 was sentenced to thirty years in prison for violating his probation.18

The imitation hypothesis suggests that violence in media puts kids like Lionel over the edge, the proverbial straw that breaks the camel’s back, but this enables us to divert our attention from the seriousness of the other risk factors in Lionel’s life. Chances are we would never have heard about Lionel or Tiffany if there were no wrestling angle to the story.

The biggest problem with the imitation hypothesis is that it suggests that we focus on media instead of the other 99 percent of the pieces of the violence puzzle. When news accounts neglect to provide the full context, it appears as though media violence is the most compelling explanatory factor.

It is certainly likely that young people who are prone to become violent are also drawn toward violent entertainment. For instance, the Columbine shooters probably used video games to practice acting out their rage on others, but where the will to carry out such extreme levels of violence came from is much more complex. Rather than implanting violent images, video games and other violent forms of popular culture enable people to indulge in dark virtual fantasies, to act out electronically in ways that the vast majority of them would never do in reality.

Here’s what the media-imitation explanation often leaves out: children whose actions parallel media violence come with a host of other, more important risk factors. We blame media violence to deflect blame away from adult failings—not simply the failure of parents but our society’s failure to help troubled young people, whom we unfortunately often overlook until it is too late.

The Flaws of Media-Effects Research

But what about all the research done on media and violence that tells us there is a connection? Although this is probably one of the most researched issues in social science, the research is not nearly as conclusive as we are told in dramatic news accounts. Headlines like “Survey Connects Graphic TV Fare, Child Behavior” (Boston Globe), “Adolescents’ TV Watching Linked to Violent Behavior” (Los Angeles Times), “Study Links Violent Video Games to Violent Thought, Action” (Washington Post), “Cutting Back on Kids’ TV Use May Reduce Aggressive Acts” (Denver Post), “Doctors Link Kids’ Violence to Media” (Arizona Republic), and “Study Ties Aggression to Violence in Games” (USA Today) are commonplace and help create the idea that the research is conclusive and clear. In fairness, the social science research isn’t readily available (or particularly interesting) for the public to read themselves, nor, I suspect, do most reporters read the studies on which they report. If they did, they would find only a weak connection between violent programming and aggressive behavior at best.19

Many researchers have built their careers on investigating a variety of potentially harmful effects that television, movies, music, video games, and other forms of popular culture might have. Two things are interesting about this body of research: first, it concentrates heavily on children, presuming that effects are strong on children and perhaps unimportant for adults, and second, researchers almost always test for negative effects of popular culture, with limited interest in other implications, such as how users make meanings from such forms of media.

Even when crime rates drop, as they have in the United States over the past two decades, these studies don’t investigate whether media could explain positive events. We might want to ask why many researchers are so committed to finding reasons to blame media for social problems and use popular culture as the central variable of analysis—rather than violence itself.

In one study, researchers considered responses to a “hostility questionnaire” or children’s aggressive play as evidence that media violence can lead to real-life violence. But aggression is not the same as violence, although in some cases it may be a precursor to violence. There is a big difference between rough play at recess, being involved in an occasional schoolyard brawl, and becoming a serious violent criminal. Most media-effects studies actually measure aggression, not violence.

Nor is it clear that these effects are anything but immediate. And aggression is not necessarily a pathological condition; we all have aggression that we need to learn to deal with and channel appropriately. A second problem is that several of the studies use correlation statistics as proof of causation. Correlation indicates the existence of relationships but cannot measure cause and effect. Reporters may not recognize this, and some researchers may forget it, misleading readers into believing research is more conclusive than it actually is.

One such study claiming media violence turned children into violent adults ironically made news the week that American troops entered Iraq in the spring of 2003. This study is unique in that it tracked 329 respondents for fifteen years, but it contains several serious shortcomings that prevent us from concluding that television creates violence later in life.20

First, the study measures aggression, not violence. The researchers defined aggression rather broadly, constructing an “aggression composite” that includes such antisocial behavior as having angry thoughts, talking rudely to or about others, and having moving violations on one’s driving record. Violence is a big jump from getting a lot of speeding tickets.

But beyond this composite, the connection between television viewing and physical aggression for males, perhaps the most interesting measure, is relatively weak. Television viewing explains only 3 percent of what led to physical aggression in the men studied.21 Although some subjects did report getting into physical altercations, fewer than 10 of the 329 participants had ever been convicted of a crime, too small a sample to make any predictions about serious violent offenders.

Other long-term studies used correlation analysis to isolate television from other factors to attempt to connect watching television with violence later in life. A 2002 study published in Science considered important issues like childhood neglect, family income, neighborhood violence, parental education, and psychiatric disorders. The authors found that these issues are positively correlated with both more television viewing and aggressive behavior.22

The authors concede that no causal connection can be made—it is very likely that the factors that lead people to watch more television are the same factors that contribute to aggression and violence. For instance, someone who watches a lot of television may have less parental involvement and less participation in other recreational activities, like sports, extracurricular programs at school, or, for older teens, a job. They may live in communities plagued by violence and spend more of their leisure time indoors. And of course we have no idea what they are watching on television in studies like these, despite the authors’ blanket statement that “violent acts are depicted frequently on television.”

And as with television, media-violence researchers mostly began studying video games with the expectation that playing violent video games causes aggression in children. Articles like “Video Games and Real-Life Aggression” (2001), “Video Games: Benign or Malignant?” (1992), and “Is Mr. Pac-Man Eating Our Children?” (1997) are just a few examples of a flurry of studies that have appeared in professional journals since the 1980s, all assessing that one outcome.23

We might wonder why researchers conduct so many studies on the same issue if the findings really are as conclusive as the authors sometimes suggest. A 2007 review in the journal Aggression and Violent Behavior found a clear case of publication bias, where studies about video games testing for negative effects are far more likely to be published than other possible findings.24 As much as social scientists claim they can be completely objective, even scholars have preconceived beliefs and agendas that color the research questions they ask, the way their studies are designed, and the interpretations that follow.

In fairness, nearly all professional researchers are up front about the shortcomings of their findings and point out that their results are preliminary or that they cannot truly state that popular culture like video games causes violence. But when a journal article hits the news wires and blogs, cautious science tends to fly out the window. Serious problems in conception or method rarely make it into press reports because they complicate the story.

Just as with other media-violence studies, the main problem with many of these video game studies is how they define and measure aggression. For instance, a 1987 study had subjects impose fake money fines on opponents as an indicator of aggression.25 A pretty big stretch, but equally questionable measures are often used to suggest that video game users will become aggressive, and even violent.

A 2000 study by psychologists Craig Anderson and Karen Dill is a case in point. “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and Life” was published in the Journal of Personality and Social Psychology and quickly made international news. Newspapers, magazines, and other professional journals reported on the study as definitive evidence that video games can increase aggressive behavior. In May 2000, Time concluded that “playing violent video games can contribute to aggressive and violent behavior in real life.”26

There’s just one problem: upon close inspection, the studies the article based its conclusions on are riddled with both conceptual and methodological problems. Let’s take a closer look to better understand why.

The Anderson and Dill results are based on two studies done with their introductory psychology students, so the sample is not representative. Part of their study looks at whether past video game use is associated with delinquency, but the most seriously delinquent youth rarely make it to college, let alone show up for an appointment to participate in a study for their psychology class. Further, their first study used nearly twice as many female students as males. But males are more likely to play video games and are much more likely to commit serious acts of violence.

In the first study, the students completed a questionnaire that asked about their favorite video games as teens, how violent they thought the games were, how much time they spent playing, and their history of aggression and delinquency. Students were asked to think back and recall information from four to ten years prior, depending on their age. From this survey, the researchers claimed they found a correlation between time spent playing video games and the students’ aggressive or delinquent behavior.

But this study was not designed to assess causality, just the existence of a relationship between time spent playing games and rating higher on irritability and aggression questionnaires.27 Nonetheless, the authors claim that video games “contribute to [the] creation of aggressive personality,” a conclusion that is a clear leap in logic.28 Because correlation measures association, not cause and effect, it is equally possible that those with aggressive personalities are more likely to enjoy aggressive video game playing.

Anderson and Dill conducted a second study in a laboratory; in this experiment, students played a video game for fifteen minutes. Some played a violent game, and others played a nonviolent educational game. When they finished, the students were asked to read “aggressive words” (like murder) on a computer screen and were timed to see how fast they said the words aloud. Because the violent-game players repeated the words faster, they were deemed to have “aggressive thoughts” and perhaps be more prone to violence. This is another leap in logic and a questionable interpretation: the words they read on the screen were not, in fact, their own thoughts, nor are aggressive thoughts necessarily dangerous. It is what we do with our hostility that is important.

The researchers did stumble onto something interesting: even a short time spent playing computer-generated games appears to quicken visual reflexes. Other studies have supported this finding: a 2005 review published by the National Swedish Public Health Institute found no reliable link with violence but instead found that players’ spatial abilities improved.29 While video games strengthen hand-eye coordination and improve reflexes, the claim that video games create the desire to actually kill a live human being is not supported by evidence. If this were the case, we would see far more of the millions of video game users become violent, rather than an extreme minority.

The Anderson and Dill study also included a follow-up one week later. Students returned to the lab and played another game for fifteen minutes. If they won, they were allowed to blast their opponent with noise (unbeknownst to the subjects, they played against a computer and their opponent wasn’t real). The violent-game players blasted their perceived opponents slightly louder and longer, and this was taken as the indicator of increased aggression caused by video games.

Is making noise really a good proxy for aggression, and is this form of aggression in any way linked with violence? The authors admit in their report that “the existence of a violent video game effect cannot be unequivocally established” from their research. Nonetheless, an Alberta, Canada, newspaper reported that this study is proof that “even small doses of violent video games are harmful to children,” even though children were not the subjects of the study. The story proclaimed that this study “discover[ed] what some parents have always suspected.”30

Time concurred: “None of this should be surprising,” the author stated, listing the violent nature of games like Doom and Mortal Kombat. Even the British medical journal the Lancet reported on this story without critical scrutiny.31 It doesn’t matter how weak a study may be; it can still gather international attention as long as it tells us what we think we already know.

The results of studies that challenge the media-violence connection or seek to find out more than a cause-effect relationship seldom make headlines, but there are plenty of them. Psychologist Guy Cumberbatch found that children may become frustrated by their failure to win at video games, as most games are designed to be increasingly difficult, but this anger does not necessarily translate to the outside world. Cumberbatch concluded, “We may be appalled by something and think it’s disgusting, but they know its conventions and see humor in things that others wouldn’t.” In 1995 psychologist Derek Scott concluded that “one should not overgeneralize the negative side of computer games playing” after his study found no evidence that violent video games led to more aggression.32

Beyond individual studies, reviews of research appear regularly in scholarly journals, and their findings are often contradictory. Although a 1998 review in the journal Aggression and Violent Behavior declared that a “preponderance of evidence” suggests video games lead to aggression, a review the next year in the same journal argued that methodological problems and a lack of conclusive evidence do not enable us to conclude that video games lead to aggression. In 2004 the same journal published another review, which noted that “there is little evidence in favor of focusing on media violence as a means of remedying our violent crime problem.” A 2001 review in Psychological Science concluded that video games “will increase aggressive behavior,” while another 2001 analysis in the Journal of Adolescent Health declared that it is “not possible to determine whether video game violence affects aggressive behavior.”33

Other studies look for more than just negative effects, seeking to understand how consumers make meanings from media texts. For example, a British study found that children’s definitions of violent television differed by gender: boys stake claims to masculinity by being “tough enough” not to be scared by media violence. The genre and context of the story contribute to whether kids consider a program violent. The researchers also found that, like adults, children tend to think media violence is harmful, just not for them—kids younger than themselves may be affected, they tell researchers.34

A study of children’s emotional responses to horror films found that they did sometimes have nightmares (parents’ biggest concern for their children), but chose to watch scary films so they could conquer their fears and toughen up.35 The study’s author concluded that watching media violence might be a way for children to prepare themselves to face their fears more directly. While parents may hope to prevent their children from ever being scared or having a bad dream, nightmares are normal ways for children (and adults) to deal with fear and anxiety.

British researchers Garry Crawford and Victoria Gosling interviewed video gamers and found that gaming is a central source of male bonding for players. Computer games let people temporarily adopt different identities and also enjoy a sense of mastery upon improving their performance in the games. Participants playing sports-related games also gain specific knowledge about the sport, which for males in particular can enhance social standing among peers.36

Studies like the ones described above are absent from news reports about media and violence, so we are encouraged to keep thinking about children as potential victims of popular culture. Even though so much research on media violence focuses on children, it is telling that children’s ideas are missing. We also overlook the reality that older people watch more television than children or teens, and the average age of a video game player is now thirty-seven.37

We might conclude that people who express higher levels of aggression and hostility are also more likely to enjoy violent forms of media. But this has not translated into higher levels of violence outside of the laboratory. While interesting, studies claiming to find strong, negative effects of media lack external validity: their findings cannot be applied to explain the crime and violence in American society.

The Many Meanings of Violence

Although many young people who have committed violence have also consumed violent media, the majority of people who play video games, watch violent movies, or listen to music with violent lyrics never do. As tempting as it may be to infer how other people will interpret violent media content, we can’t predict someone’s behavior simply from the popular culture they consume.

We might agree that some content is shocking and disturbing, as each new, more realistic-looking version of Grand Theft Auto tends to be. But even though a scene from a film or lyric might be offensive to some, there is no way of knowing for certain how all viewers/listeners/players will actually make sense of the content.

The fear of media violence is based on the belief that young people cannot discern fantasy from reality (critics rarely voice the same concerns about adults) and that this failure will condition kids to regard violence as a rewarding experience. It’s important to note that the inability to distinguish fantasy from reality is a key indicator of psychosis in adults, yet many seem to accept it as a natural condition of childhood and even adolescence.

An unpublished study of eight children, claiming to have evidence about the fantasy-reality divide, was splashed across headlines throughout the United States and Canada. “Kids may say they know the difference between real violence and the kind they see on television and video, but new research shows their brains don’t,” announced Montreal’s Gazette.38 This research, conducted by John Murray, a developmental psychologist at Kansas State University, involved MRIs of eight children, ages eight to thirteen. As the kids watched an eighteen-minute fight scene from Rocky IV, their brains showed activity in areas that are commonly activated in response to threats and emotional arousal. This should come as no surprise, since entertainment often elicits emotional responses; if film and television had no emotional payoff, why would people watch?

But the press took this small study as proof of what we already think we know: kids can’t tell the difference between fantasy and reality. A Kansas City Star reporter described this as “a frightening new insight,” and the study’s author stated the children “were treating Rocky IV violence as real violence.” And while Yale psychologist Dorothy Singer warned that the size of the study was too small to draw any solid conclusions, she also said that the study is “very important.”39

Results from a study this small might help a researcher secure grant money for further investigation, but they almost never make the news. This study, however, was treated as another piece of the puzzle, and it clearly made headlines because of its dramatic elements: a popular movie, medical technology, and children viewing violence.

In any case, there are big problems with the interpretation offered by the study’s author. First, this study actually discredits the idea of desensitization. The children’s brains clearly showed some sort of emotional reaction to the violence they saw. They were not emotionally deadened, as we are often told to fear. But kids can’t win either way within the media-violence fear, since feeling too little and feeling too much are both interpreted as proof that media violence is harmful to children.

Second, by focusing on children, the study and subsequent reports make it appear as though children’s thoughts are completely different from adults’. Somehow, by virtue of children being children, their brains can know things that they don’t. But in all likelihood adult brains would react in much the same way. Do an MRI on adults while they watch pornography, and their brains will probably show arousal. Does that mean the person would think that he or she just had actual sex? The neurological reaction would probably be extremely similar, if not identical, but we can’t read brain waves and infer meaning. That’s what makes humans human: the ability to create meaning from our experiences. And adults are not the only ones capable of making sense of their lives.

It is a mistake to presume that media representations of violence and real violence have the same meaning for all audiences, or that MRIs can measure how we interpret stories. An anvil might fall on a cartoon character, or the CSI sleuths might investigate a new murder, but the meanings of the two are quite different. A great deal of what counts as television violence today comes from the success of franchises such as CSI, Law and Order, and other police investigation shows that promote the power of law enforcement, not crime.

Even if we have become emotionally immune to violence in popular culture, it by no means indicates that when violence really happens, it has no effect. Ironically, studies that assess violence on television do not consider real violence reported on the news. When we hear about real violence, we may feel a little more concerned but still experience minimal emotional reaction; after all, this is a daily feature of news broadcasts, and it would be overwhelming to get upset every time we turn on the news. But when the event is close to home, the violence appears random, or we see the victims as people like us, the event becomes all the more meaningful. Of course, witnessing violence in person has a different meaning than mediated violence.

Ironically, critics of media violence seem to have problems distinguishing between in-person violence and media violence. This is probably because many of them have had little exposure to violence other than through media representations. Thankfully, I include myself in this category. Aside from popular culture and a fistfight or two witnessed at school, violence has mainly been a vicarious experience for me.

While working as a researcher studying juvenile homicides, I discovered some of the differences between media violence and actual violence. The study required our research team to comb through police investigation files looking for details about the incidents. Just looking at the files could be difficult, so we tried to skip past the crime-scene and coroner’s photographs to keep from becoming emotionally overwhelmed.

One morning while I was looking through a case file, the book accidentally fell open to the page with the crime-scene photos. I saw a young man, probably about my age at the time, slumped over the steering wheel of his car. He had a gunshot wound to his forehead, a small red circle. His eyes were open. I felt a wrenching feeling in my stomach, a feeling I have never felt before and have fortunately never felt since. At that point I realized that regardless of the hundreds, if not thousands, of violent acts I had seen in movies and television, none could come close to this. I had never seen the horrific simplicity of a wound like that one, never seen the true absence of expression in a person’s face. No actor I had ever seen was able to truly “do death” right, I realized. It became clear that I knew nothing about violence for the most part. Yes, I have read the research, but that knowledge was just academic; this was real.

This is not to say that violent media do not create real emotional responses. Good storytelling can create sadness and fear, and depending on the context violence can even be humorous (like the Three Stooges or other slapstick comedy). Media violence may elicit no emotional response—but this does not necessarily mean someone is desensitized or uncaring when real violence happens in our lives. It may mean that a script was mediocre and that the audience doesn’t care about its characters.

But it could be because media violence is not real and most of us, even children, know it. Sociologist Todd Gitlin calls media violence a way of getting “safe thrills.”40 Viewing media violence is a way of dealing with the most frightening aspect of life in a safe setting, like riding a roller coaster while knowing that you will get off and walk away in a few minutes.

Violence in Context: Poverty and Racial Inequality

If we want to learn about what causes kids to commit real acts of violence, depictions of media violence won’t help us much—talking with people who have experienced both will. For several years in the mid-1990s, I worked with criminologists on a broad study of juvenile violence to understand the causes and correlates of youth violence in Los Angeles.41 We wanted to understand the full context of violence in order to help develop conflict-management programs with community members and reduce levels of violence in these communities.

When we talk about violence and media, it is common to defer to people who have studied media effects—but most of these researchers haven’t studied violence itself much, if at all.42 Truly understanding the meanings of both violence and media comes from experiencing them both firsthand. Unfortunately, many young people in Los Angeles have; to find them, we went to the areas with the highest arrest rates for violent crime (not to college students or video gamers). These communities consistently had high poverty rates and gang activity and were made up predominantly of African Americans and Latinos in low-income neighborhoods.

Initially, we conducted a survey to ascertain the level of violence in each neighborhood. We then did follow-up in-depth interviews with fifty-six teen boys, aged twelve to eighteen, who had experienced violence as victims or offenders (or both) to understand how they made sense of both real and media violence.43 Our interviewees clearly described the differences between media violence and actually experiencing violence firsthand.

Above all, their stories tell us that the meaning of violence is made within particular social contexts. For most of those interviewed, poverty and neighborhood violence were overwhelming influences in their lives, shaping their interactions and their understanding of their futures. More than three-quarters of respondents (77 percent) noted that gang activity was prominent in their neighborhoods. Slightly less than half (48 percent) reported feeling tremendous pressure to join gangs, but less than one in ten (9 percent) claimed gang membership. Eighty-eight percent heard guns being fired on a regular basis, and nearly one-third (30 percent) had seen someone get shot. More than one-quarter (27 percent) had seen a dead body in person, and 14 percent had been threatened with a gun themselves. Almost one-quarter (23 percent) had been attacked with some sort of weapon.

Through interviewing these young people, we found that the line between victim and offender is hard to draw and that violent incidents occur within murky contexts. The people we call violent offenders are not necessarily predators, looking to swoop down on the weak and innocent. Instead, we see that violent incidents often happen within a larger context of fear, intimidation, despair, and hopelessness. These kids were trying to survive in destroyed communities as best they could. Unfortunately, violence was often a part of their survival.

Critics often charge that popular culture like gangsta rap glamorizes violence within central cities. Understanding the broader social context can help us understand both violence and the popular culture it sometimes spawns. The concept of hegemonic masculinity, in which men are encouraged to strive to be dominant and powerful over women and other men, can help us understand why violence might emerge more in economically disadvantaged areas where there are few other ways for young men to feel powerful.44 Not all men seek this ideal, nor do many accomplish it; instead, hegemonic masculinity is held out as what makes a man a “real man.” In addition to subordinating women, hegemonic masculinity demands that men show physical strength and aggressiveness, hyperheterosexuality, and emotional detachment.

As sociologist Elijah Anderson found in his ethnographic research, many young people learn to adopt a posture of violence in order to avoid being victims. And as Richard Majors and Janet Mancini Billson, authors of Cool Pose: The Dilemmas of Black Manhood in America, point out, “Presenting to the world an emotionless, fearless, and aloof front counters the low sense of inner control, lack of inner strength, absence of stability, damaged pride, shattered confidence and fragile social competence that come from living on the edge of society.”45 To Majors and Billson, the “cool pose” is a response to African American disempowerment, a defense mechanism for managing emotions in communities with high levels of violence.

As a marginalized group, African American men have historically had serious economic constraints, reducing their ability to achieve the economic domination associated with hegemonic masculinity. In our capitalist, consumer-oriented society, this creates a major sense of emasculation. According to a 2012 report from the Bureau of Labor Statistics, black men working full-time earned 77 percent of white men’s weekly earnings (by contrast, black women earned 84 percent of white women’s wages).46

Public discussions about violence often ignore these contexts. The young people we interviewed clarified several key differences between their actual experiences with violence and media violence. For one, many described media violence as gorier, with over-the-top special effects. Over and over the boys described how fear in their lives comes not from seeing blood on- or off-screen but from the uncertainty about when violence will next occur. One seventeen-year-old stated that because violence in his neighborhood was so pervasive, media violence was strangely comforting: he said at least when it occurred on television, he knew he was safe.

Another key difference in meaning is the clear distinction between good and evil in media depictions of violence. “It’s more pumped-up like, [a] heroic thing,” an eighteen-year-old informant told us. “Like most of violence on TV is like a heroic thing. Like a cop does something amazing. Like somebody like a bad guy, the violence is usually like pin-pointed toward a bad person.” Other boys described the lack of punishment in their experiences compared with media violence; law enforcement to them was not as effective as it may appear on police dramas.

A seventeen-year-old compared his experiences with the Jerry Springer show, saying, “They have security that break it up if something happens. [Nobody] is really going to get hurt that much because there probably will be two or three blows and security will hop on stage and grab the people.” He went on to describe how, in his experience, the police were not concerned with who the good guy was, that there was no discussion, and often no real resolution. Ironically, one of the central complaints about media violence is that often there are no consequences, but our informants told us that in reality things are even worse.

These contexts help us understand why some young people of color mistrust police. For those who have had more positive interactions with police, the simmering rage sometimes reflected in rap lyrics might be hard to comprehend. Sociologist Elijah Anderson’s ethnography of African Americans’ experiences with police in a northeastern city highlights the disparity. “Scrutiny and harassment by local police makes black youths see them as a problem to get beyond,” Anderson notes, and he describes the actions of the “downtown police” as “looking for ‘trouble.’ They are known to swoop down arbitrarily on gatherings of black youths standing on a street corner. They might punch them around, call them names, and administer other kinds of abuse, apparently for sport.”47

A major concern about media violence is that it creates unfounded fear that the world is a dangerous place. Communications scholar George Gerbner describes this as the “mean-world” syndrome: by watching so much television violence, people mistakenly believe that the world is a violent place. But what about people who do live in dangerous communities? For the boys we interviewed, poverty and hopelessness gnawed away at them on a daily basis. “It’s just poverty,” an eighteen-year-old told us. “I wouldn’t recommend nobody comin’ here.… I just wouldn’t recommend it.” Not surprisingly, the majority of boys we interviewed did not find media violence to be a big source of fear. In fact, some boys said they enjoyed watching violence to point out how producers got it wrong. As experts, they could detect the artificiality of media violence.

The boys also expressed resentment when their neighborhoods are used in stereotypical portrayals. “The people that make the movies, I’m pretty sure they never lived where we live at, you know, went to the schools we went to,” explained a seventeen-year-old we interviewed. “They were, most of ’em were born in you know, the upper-class whatever, you know? I don’t think they really have experienced how we live so that’s why I don’t think they really know how it is out here.” Others explained how movies, violent or otherwise, were a luxury they could rarely afford. Besides, impoverished communities often have no movie theaters. One boy told us he never went to movies because it wasn’t safe to be out at night or to go to other neighborhoods and possibly be mistaken for a rival gang member.

Some of the boys did say that media violence made them more afraid, based on the violent realities of their communities. “If you watch a gangster movie and you live in a neighborhood with gangsters, you think you’ll be killed,” an informant said. Another respondent, who said he had to carry a knife for protection, told us, “It makes you fear going outside. It makes you think twice about going outside. I mean, how can you go outside after watching someone get shot on TV? You know, [my friend] was just walking outside of his house and got shot. And you think to yourself, damn, what if I walked out of my house and got shot?” In both cases the fear that stemmed from media violence was rooted in their real-life experiences.

Violence exists within specific social contexts; people make meaning of both real violence and media violence in the context of their lives. It is clear from these examples that neighborhood violence and poverty are essential to understanding the meanings these young people give to media violence. Other contexts would certainly yield different meanings, but when researchers or critics focus on media violence, real-life circumstances are often overlooked.

We also need to acknowledge the meaning of violence in American media and American culture. It’s too easy to say that violent media merely reflect society, or that producers are just giving the public what it wants. But violence does sell: it is dramatic, a simple cinematic tool, and easy to market both at home and abroad, since action-adventure movies present few translation problems for overseas distributors.

But in truth, violence and aggression are central facets of American society. We reward violence in many contexts outside of popular culture. Aggressive personalities tend to thrive in capitalism: risk takers are highly prized within business culture. We celebrate sports heroes for being aggressive, not passive. The best hits of the day make the football highlights on ESPN, and winning means “decimating” and “destroying” in broadcast lingo.

We also value violence, or its softer-sounding equivalent, the use of force, to resolve conflict. On local, national, and international levels, violence is largely considered acceptable. Whether this is right or wrong is the subject for a different book, but the truth is that in the United States the social order has traditionally been created and maintained through violence. We can’t honestly address media violence until we recognize that, in part, our media culture is violent because we, as a society, are.

Challenging Media and Real Violence

Politicians, researchers, and the news media may be fascinated by media violence, but the everyday causes of actual violence often receive little attention from policy makers. Yes, media violence may be a small link in a long chain, but certainly it’s not the central link. There’s nothing wrong with media criticism—we could probably use more of it—but when media criticism takes the place of understanding the roots of violence, we have a problem. To hear that “Washington [is] again taking on Hollywood” may feel good to the public and make it appear as though lawmakers are on to something, but real violence remains off the agenda.48 This tactic appeals to many middle-class constituents whose experience with violence is often limited.

While some fear that the content of video games and other violent entertainment may be harmful, we also need to consider the harm of diversion: the issues that politicians and policy makers could be exploring instead of succumbing to the media-violence moral panic. We might ask why so many parents are afraid for their kids to play outside in their communities and why many neighborhoods have few spaces for teens to safely congregate. For many parents, violent media exposure is far less of a concern than exposure to actual violence. To understand why people become violent, we need to start by looking at garden-variety violence rather than the headline-grabbing exception.

Violence elicits fear because it can seem to defy prediction. After the high-profile shootings of the 1990s, the FBI conducted a study to produce a profile of school shooters. In the end, it couldn’t: school shootings are so rare, and the shooters shared many characteristics with nonviolent kids—like playing video games.

This is not to say that we cannot predict what leads to violence. The majority of young people who turn to violence have a number of other risk factors that we need to focus on more: violence in the home or neighborhood (or both), a personal or family history of substance abuse (or both), and a sense of hopelessness due to extreme poverty. Specific contexts also must not be ignored; for instance, in the study of youth violence in Los Angeles I noted earlier, we found that the vast majority of homicides involving young offenders are gang related, drawing on the aforementioned problems, not video games.

Economically disadvantaged people living in racially isolated communities are most likely to experience real violence, but least likely to appear on politicians’ radar. A national focus on media rather than real violence draws on existing fears and reinforces the view that popular culture, not the decades-long neglect of whole communities, leads to violence. It provides a cultural explanation that seems to address violence, but completely overlooks social structure. It may be more interesting to think about media violence—and, ironically, more entertaining—as a cause of real violence, but without examining structural conditions like poverty, unemployment, and other factors that contribute to family disruption, we won’t get very far.

Notes

1. Brown v. Entertainment Merchants Association, no. 08–1448 US (2011).

2. Jeneba Ghatt, “Supreme Court Overreaches on Video Game Ruling,” Washington Times, June 30, 2011, hood/politics-raising-children/2011/jun/30/supreme-court-overreaches-video-game-ruling/; Robert Scheer, “The Supreme Court’s Video Game Ruling: Yes to Violence, No to Sex,” Nation, June 29, 2011, violence-no-sex; editorial, “The High Court’s Misguided Decision on Video Games,” Washington Post, June 27, 2011, misguided-decision-on-violent-video-games/2011/06/27/AGilYDoH_story.html.

3. Jennifer LaRue Huget, “Study Links Violent Video Games to Violent Thought, Action,” Washington Post, March 1, 2010; Eryn Brown, “Violent Video Games and Changes in the Brain,” Los Angeles Times, November 30, 2011, videogame-brain-20111130; editorial, “A Poisonous Pleasure,” St. Louis Post-Dispatch, July 30, 2000, B2; Richard Saltus, “Survey Connects Graphic TV Fare, Child Behavior,” Boston Globe, March 21, 2001, A1.

4. Jonathan L. Freedman, Media Violence and Its Effect on Aggression, 200.

5. Alexia Cooper and Erica L. Smith, Homicide Trends in the United States, 1980–2008 (Washington, DC: US Department of Justice, 2011); Jennifer L. Truman, Criminal Victimization, 2010 (Washington, DC: US Department of Justice, 2011).

6. Cooper and Smith, Homicide Trends; Federal Bureau of Investigation, Ten-Year Arrest Trends: Uniform Crime Reports for the United States, 2010 (Washington, DC: US Department of Justice, 2011), u.s/2010/crime-in-the-u.s.-2010/tables/10tbl32.xls.

7. James Alan Fox and Marianne W. Zawitz, Homicide Trends in the United States (Washington, DC: US Department of Justice, 2000).

8. Federal Bureau of Investigation, Arrests by Age: Uniform Crime Reports for the United States, 2010 (Washington, DC: United States Department of Justice, 2011), u.s.-2010/tables/10tbl38.xls; population estimate from US Census Bureau, Population Division: Annual Estimates of the Population by Selected Age Groups and Sex for the United States, 1980 to 2010 (Washington, DC: US Bureau of the Census, 2012); Federal Bureau of Investigation, Uniform Crime Reports for the United States, 1964–1999 (Washington, DC: US Department of Justice, 2000).

9. Lori Dorfman et al., “Youth and Violence on Local Television News in California,” American Journal of Public Health 87 (1997): 1311–1316.

10. Los Angeles Police Department, Statistical Digest 2010, Information Technology Division.

11. E. Britt Patterson, “Poverty, Income Inequality, and Community Crime Rates,” in Juvenile Delinquency: Historical, Theoretical, and Societal Reactions to Youth, edited by Paul M. Sharp and Barry W. Hancock (Upper Saddle River, NJ: Prentice-Hall, 1998), 135–150.

12. For more discussion, see William Julius Wilson, More Than Just Race: Being Black and Poor in the Inner City.

13. Wayne Wooden and Randy Blazak, Renegade Kids, Suburban Outlaws: From Youth Culture to Delinquency; Howard N. Snyder and Melissa Sickmund, Juvenile Offenders and Victims: 2006 National Report (Washington, DC: US Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention, 2006), 67.

14. Cited in Glenn Gaslin, “Lessons Born of Virtual Violence,” Los Angeles Times, October 3, 2001, E1.

15. “Wrestle-Slay Boy Faces Life,” Daily News, January 26, 2001, 34; Michael Browning et al., “Boy, 14, Gets Life in TV Wrestling Death,” Chicago Sun-Times, March 10, 2001, A1; Caroline J. Keough, “Young Killer Wrestles Again in Broward Jail,” Miami Herald, February 17, 2001, A1.

16. “13 Year-Old Convicted of First-Degree Murder,” Atlanta Journal and Constitution, January 26, 2001, B1; Caroline Keough, “Teen Killer Described as Lonely, Pouty, Disruptive,” Miami Herald, February 5, 2001, A1; Tamara Lush, “Once Again, Trouble Finds Lionel Tate,” St. Petersburg Times, May 25, 2005, B1.

17. “Murder Defendant, 13, Claims He Was Imitating Pro Wrestlers on TV,” Los Angeles Times, January 14, 2001, A24. Later in media interviews, Lionel said that Tiffany was lying down on the stairs and he accidentally crushed her when he came bounding down the steps.

18. Lush, “Once Again, Trouble Finds Lionel Tate”; Abby Goodnough, “Ruling on Young Killer Is Postponed for Psychiatric Exam,” New York Times, December 6, 2005, 25.

19. See Freedman, Media Violence and Its Effect on Aggression, 43.

20. L. Rowell Huesmann et al., “Longitudinal Relations Between Children’s Exposure to TV Violence and Their Aggressive and Violent Behavior in Young Adulthood: 1977–1992,” Developmental Psychology 39, no. 2 (2003): 201–221. Kids who regularly watched shows like Starsky and Hutch, The Six Million Dollar Man, and Road Runner cartoons in 1977 were regarded as high-violence viewers.

21. Based on r = .17.

22. Jeffrey G. Johnson et al., “Television Viewing and Aggressive Behavior During Adolescence and Adulthood,” Science 295 (March 2002): 2468–2471.

23. Lillian Bensley and Juliet Van Eenwyk, “Video Games and Real-Life Aggression: Review of the Literature,” Journal of Adolescent Health 29 (2001): 244–257; Jeanne B. Funk, “Video Games: Benign or Malignant?,” Journal of Developmental and Behavioral Pediatrics 13 (1992): 53–54; C. E. Emes, “Is Mr. Pac Man Eating Our Children? A Review of the Effect of Video Games on Children,” Canadian Journal of Psychiatry (1997): 409–414.

24. C. J. Ferguson, “Evidence for Publication Bias in Video Game Violence Effects Literature: A Meta-analytic Review,” Aggression and Violent Behavior (2007): 470–482.

25. M. Winkel, D. M. Novak, and H. Hopson, “Personality Factors, Subject Gender, and the Effects of Aggressive Video Games on Aggression in Adolescents,” Journal of Research in Personality 21 (1987): 211–223.

26. Craig Anderson and Karen Dill, “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and Life,” Journal of Personality and Social Psychology 78 (2000): 772–790; Amy Dickinson, “Video Playground: New Studies Link Violent Video Games to Violent Behavior,” Time, May 8, 2000, 100.

27. For further problems with this study, see Guy Cumberbatch, “Only a Game?,” New Scientist, June 10, 2000, 44.

28. Anderson and Dill, “Video Games,” 22.

29. A. Lager and S. Bremberg, “Health Effects of Video and Computer Game Playing: A Systematic Review of Scientific Studies,” National Swedish Public Health Institute, 2005.

30. Anderson and Dill, “Video Games,” 33; Marnie Ko, “Mortal Konsequences,” Alberta Report, May 22, 2000.

31. Dickinson, “Video Playground,” 100; Marilynn Larkin, “Violent Video Games Increase Aggression,” Lancet, April 29, 2000, 1525.

32. Cumberbatch quoted in Charles Arthur, “How Kids Cope with Video Games,” New Scientist, December 4, 1993, 5; Derek Scott, “The Effect of Video Games on Feelings of Aggression,” Journal of Psychology 129 (1995): 121–133.

33. Karen E. Dill and Jody C. Dill, “Video Game Violence: A Review of the Empirical Literature,” Aggression and Violent Behavior 3 (1998): 407–428; Mark Griffiths, “Violent Video Games and Aggression: A Review of the Literature,” Aggression and Violent Behavior 4 (1999): 203–212; Joanne Savage, “Does Viewing Violent Media Really Cause Criminal Violence? A Methodological Review,” Aggression and Violent Behavior 10 (2004): 99–128; Craig A. Anderson and Brad J. Bushman, “Effects of Violent Video Games on Aggressive Behavior, Aggressive Cognition, Aggressive Affect, Physiological Arousal, and Prosocial Behavior: A Meta-analytic Review of the Scientific Literature,” Psychological Science 12 (2001): 353–359; Lillian Bensley and Juliet Van Eenwyk, “Video Games and Real-Life Aggression: Review of the Literature,” Journal of Adolescent Health 29 (2001): 244–257.

34. David Buckingham and Julian Wood, “Repeatable Pleasures: Notes on Young People’s Use of Video,” in Reading Audiences: Young People and the Media, edited by David Buckingham, 132.

35. Ibid., 137.

36. Garry Crawford and Victoria Gosling, “Toys for Boys? Marginalization and Participation as Digital Gamers”; Garry Crawford, “The Cult of the Champ Man: The Cultural Pleasures of Championship Manager/Football Manager Games,” 523–540.

37. Statistics from industry group Entertainment Software Association, (accessed on May 19, 2012).

38. Chris Zdeb, “Violent TV Affects Kids’ Brains Just as Real Trauma Does,” Gazette (Montreal), June 5, 2001, C5.

39. Jim Sullinger, “Forum Examines Media Violence,” Kansas City Star, August 29, 2001, B5; Marilyn Elias, “Beaten Unconsciously: Violent Images May Alter Kids’ Brain Activity, Spark Hostility,” USA Today, April 19, 2001, D8.

40. Todd Gitlin, Media Unlimited: How the Torrent of Images and Sounds Overwhelms Our Lives, 92.

41. I would like to thank Cheryl Maxson and Malcolm Klein for including measures in their study, “Juvenile Violence in Los Angeles,” sponsored by the Office of Juvenile Justice and Delinquency Prevention, grants #95-JN-CX-0015, 96-JN-FX-0004, and 97-JD-FX-0002, Office of Justice Programs, US Department of Justice. The points of view or opinions in this book are my own and do not necessarily represent the official position or policies of the US Department of Justice. All interviews were conducted in 1998. The interviews centered on the youths’ descriptions of a selection of the violent incidents they had experienced, the major focus of the study. At the end of each interview, youths were asked whether they thought television and movies contained a lot of violence. This question was posed to ascertain their perceptions of the levels of violence in media. Following this, respondents were asked whether they thought that viewing violence in media made them more afraid in their neighborhoods and why or why not. This topic helped respondents begin to compare the two types of violence and consider the role of media violence in their everyday lives. Finally, respondents were asked to name a film or television program that they felt contained violence and compare the violence in that film or program to the violence they had experienced and described earlier in the interview. This question solicited direct comparison between the two modes of experience (lived and media violence). The subjects were able to define media violence themselves, as they first chose the medium and then the television program or film that they wished to discuss. Definitions of media violence were not imposed on the respondents. The interviews were tape-recorded and transcribed. Data were later coded using qualitative data analysis software to sort and categorize the respondents’ answers. The sample was drawn randomly from addresses obtained from a marketing organization, and households were then enumerated to determine whether a male between the ages of twelve and seventeen had lived in the residence for at least six months. (Interviewees were sometimes eighteen at the time of follow-up.) It was determined that if youths had lived in the neighborhood for less than six months, their experiences might not accurately reflect activity within that particular area, so they were excluded in the original sampling process.

42. Researchers who study media violence often have backgrounds in communications, psychology, or medicine.

43. No females were included because primary investigators concluded from previous research that males were more likely to have been involved in violent incidents.

44. R. W. Connell, Masculinities.

45. Elijah Anderson, The Code of the Street: Decency, Violence, and the Moral Life of the Inner City (New York: W. W. Norton, 2000); Richard Majors and Janet Mancini Billson, Cool Pose: The Dilemmas of Black Manhood in America, 8.

46. Bureau of Labor Statistics, “Median Weekly Earnings by Race, Ethnicity, and Occupation, First Quarter 2012,” April 19, 2012 (Washington, DC: US Department of Labor, 2012).

47. Elijah Anderson, Streetwise: Race, Class, and Change in an Urban Community, 197.

48. Megan Garvey, “Washington Again Taking on Hollywood,” Los Angeles Times, June 2, 2001, A1.



Pop Culture Promiscuity: Sexualized Images and Reality

When Disney sensation Miley Cyrus, star of Hannah Montana, appeared in Vanity Fair in 2008, a firestorm of criticism erupted. The fifteen-year-old posed provocatively, wearing only a bed-sheet in some of the photos. A year later, she performed at the Teen Choice Awards while dancing on a pole. Was she a bad role model for young girls, a young woman seeking to shed her Disney image, or both? Do representations of sexuality encourage teens to become sexually active?

We live in a time when virtually nothing is off-limits in pop culture, and the most private information about celebrities’ love lives becomes tabloid fodder. Many adults fear that sex is no longer a big deal to kids, that young teens are casually “hooking up” and “growing up faster than ever.” Books like Teaching True Love to a Sex-at-13-Generation suggest that today’s entire generation of kids is sexually active before high school. Their evidence? They look to the media: pop culture is full of sex, so kids must be, too.

As we will see in this chapter, while popular culture may be awash in sex, young people are not nearly as sexually active as many fear. It may seem like a given that teens think sex is just another way of saying hello, thanks to news about “sexting”—sending racy pictures via text or posting them on Facebook or Twitter. The Boston Globe and other news sources describe young teens’ sexy Halloween costumes—including some dressed as prostitutes—and pretty soon it seems like a trend.1 After all, who hasn’t seen a young teen or tween wearing an outfit that is anything but age appropriate?

Daytime talk shows have long featured promiscuous teens as problems that their parents can’t handle. Topics like “My teen is going on a date” might be more applicable to regular kids, but probably not a big ratings grabber. Horror stories of teen promiscuity make the news, appearing to support the popular hypothesis that today’s kids are morally depraved and to imply that the media are at fault.

Note that stories about promiscuous adults aren’t common, but headlines like “Don’t Let TV Be Your Teenager’s Main Source of Sex Education,” “Grappling with Teen Sex on Television,” “MTV Show Promotes Teen Sex, Drug Use, Experts Say,” and “Racy Content Rising on TV” help create an atmosphere of anxiety among adults who fear that the rules have changed and that young people are becoming more promiscuous than ever.2

“Kids pick up on—and all too often act on—the messages they see and hear around them,” wrote sex educator Deborah Roffman in a Washington Post article titled “Dangerous Games: A Sex Video Broke the Rules, but for Kids the Rules Have Changed.”3 It is interesting that we don’t level the same charges against adults, who are more likely to be sexually active, and more likely to be rapists and sex offenders, than teenagers.

Roffman’s article featured the story of a teenage boy from the Baltimore area who videotaped himself having sex with a classmate and then showed the video to his friends. Certainly, this story is troubling, but also troubling is the supposition that this incident is representative of all young people, whose rules of proper conduct have allegedly changed. We wouldn’t dare make the same sweeping generalizations about equally appalling adult behavior. But Roffman is not surprised: “What else do we expect in a culture where by the age of nineteen a child will have spent nearly 19,000 hours in front of the television … where nearly two-thirds of all television programming has sexual content?”4

There are several things wrong with the assumption that sexual content from television led to this sex video. First, if our television culture is so sex laden and causes such inappropriate behavior, we would expect many more incidents like this, but this case was enough of an anomaly that it made headlines. Clearly, the story received media attention based on its shock value and its rarity. Second, the “19,000 hours” is an average, and perhaps a dubious one at that. How many hours of television did you watch last week? Last night? Personally, I have no idea, and neither do a lot of people who respond to the surveys from which statistics like these are derived. The amount of viewing tells us nothing about the content itself. Besides, we have no idea whether this kid even watched television—typically, television viewing declines in adolescence, and adults tend to watch more television than young people do.5

Finally, “sexual content” in such studies is often broadly defined to include flirting, handholding, kissing, and talk about sex, so the “two-thirds of all television programming” estimate is questionable at best. Roffman compared the incident to the 1999 film American Pie, where the lead character broadcast a sexual encounter over the Internet. However, there is no proof the Maryland boy even saw this movie.

It is far too simplistic to blame raunchy movie scenes for changes in sexual behavior. This chapter challenges this fear by examining why sexual attitudes have changed over the past century. The way we think about sex has changed much more than the actual behavior. We will see that youth of today are not nearly as promiscuous as some might fear. Rather than being blindly influenced by sex in media, teens are actively involved in trying to figure out who they are in a culture that might offer a lot of sexual imagery but little actual information about intimacy, sex, and sexuality. In a 2012 survey, 38 percent of teens reported that their parents were the most influential when it came to making decisions about sex. Only 9 percent cited the media.6

When we take a closer look at how young people make sense of sexuality in media, we see that they are not simply influenced by popular culture, but use sexual representations to create identity and status within their peer groups. As we will see, changes in economics and demographic shifts during the past century have driven changes in sexual attitudes and behavior. But sexuality has always been part of coming of age, and parents have always felt anxious about this passage. Adults’ declining ability to control children’s sexual knowledge has created a high level of fear, and popular culture—the source of what seemed like secret information in the past—is often blamed instead of structural conditions and social changes.

The Sexual Revolution: Blame Social Structure, Not Popular Culture

Not too long after the motion picture camera was invented, someone filmed the first sex scene. Contrast this Victorian-era development with prevailing Comstock laws, which made any reference to birth control sent through the mail legally obscene and a violation of federal law. Now you have a basic understanding of the way in which concerns about sex in media have always existed—and have always been contradictory.

In spite of the common belief that sexually laden popular culture is something new to recent decades, films of the 1920s featured passionate kissing by stars like Rudolph Valentino, occasional female nudity, and sometimes even orgies. Because most of these films were not preserved and are rarely screened, we can easily forget they existed. But critics were just as concerned about morality and popular culture a century ago.

And like today, Hollywood’s early stars engaged in sometimes scandalous behavior, including wild parties, frequent failed marriages, and sex scandals. Most notably, Roscoe “Fatty” Arbuckle was tried for rape and murder when a woman died after visiting his hotel room in San Francisco. Although the cause of death was later determined not to be homicide, Arbuckle’s notoriety made conservative groups concerned about the impact this growing industry would have on American society. Movies were a new source of influence that religious leaders feared would bypass the family, school, and religion in importance in young people’s lives. If this sounds familiar, it is because things haven’t changed as much as we might think.

Politicians and activists called for government censorship. Instead, the new film industry promised self-regulation by creating a special organization to monitor movie content and ensure movies met what were deemed acceptable moral standards. The effort was led by prominent political figure Will Hays, and what came to be known as “the Code,” formally implemented in 1934, restricted film content to what the Hays Office deemed “wholesome entertainment.”7


But the Code censored much more than just sex. Rules governing film production were overtly racist—no interracial relationships were allowed—and any criticism of the status quo was interpreted as a violation of the “moral obligation” of the entertainment establishment. The Hays Office justified extremely rigid boundaries of filmmaking in the name of preserving children’s “morality.” Films that critiqued corporations and capitalism were deemed un-American; content that seemed to criticize “natural or human” laws could potentially incite “the lower and baser element” of American society.8

The Code dominated film production until 1966. With competition from television cutting into box-office revenues, films started presenting sexuality more frankly beginning in the 1960s, particularly as European New Wave films by directors like Federico Fellini and Jean-Luc Godard helped redefine movies as art. Twenty years later, cable television and the VCR brought more sexually explicit programming directly to consumers, bypassing network television standards and practices. In order to keep their dwindling market share, networks have had to compete by offering content that keeps our attention when so many other things might tempt us to look elsewhere.

Why is there so much sexual imagery in media culture today? Blame market forces. Sex has become another product of contemporary society, one that circulates more rapidly and is more difficult to control and regulate because highly sexualized images attract attention and profit.

Popular culture may be different today than in the past—yes, Lucille Ball couldn’t say the word pregnant on her 1950s television show. But popular culture does not explain social changes. We didn’t arrive here on the coattails of television or movies; popular culture incorporates and reflects societal issues and values, many of which some people find objectionable. Instead of focusing attention solely on popular culture, we need to first understand the social context of sex in twentieth- and twenty-first-century America.

Our concerns about sexually active young people are by no means new—during the 1920s adults were horrified by the short dresses young women wore and the “petting parties” young people attended, as well as what was going on in the backseats of the new horseless carriages. So why the often rose-colored view of sex in the past?

Many older adults today came of age between the 1930s and 1950s and were not entirely chaste as teens themselves. But the movies and television of their era were, and this is what we mostly think of when we think of the middle of the twentieth century. Chances are, if history is any indicator, in about fifty years people will look back at today as an age of innocence, too. When it comes to young people and sexuality, the past has always seemed more innocent because it is viewed through the lens of nostalgia.

Rarely does a history book mention teen sex as a major concern of the day, nor do our grandparents typically discuss this aspect of their lives when recalling their youth. But rather than significant differences in behavior, what has seriously changed is the expectation of who teens are. Until the twentieth century, most teens were likely to be regarded as near adults with full familial and economic responsibilities. They worked, married, and raised children—siblings or their own—at much earlier ages than many of us do today.

The experience of adolescence, a middle period between childhood and adulthood, emerged as the outcome of industrialization and the diminished necessity for people in their teen years to join the labor force. The time before adulthood steadily lengthened throughout the twentieth century, as did the gap between sexual maturity and marriage. Socially and sexually, we expect teenagers today to function partially as adults and partially as children. The roles and expectations of adolescents today are far different from those of their counterparts a hundred years ago, and even from those of their grandparents at midcentury, when many young people married right after high school. Teen sex was very common: it just took place after a wedding (or precipitated one). Yet today we hope that people who are sexually mature don’t engage in sexual behavior before socially defined adulthood, despite the fact that children—particularly girls—reach physical maturation earlier than in midcentury.

Along with changes in the experiences and meaning of adolescence came different beliefs about courtship and dating. During the “good old days,” adults shared many of the same fears that today’s parents have, that young people were engaging in behavior they never did at their age and that kids had too much freedom and not enough sexual restraint.

Rather than the “revolution” we are so often told happened in the 1960s, young people’s sexual behavior steadily changed throughout the twentieth century, as have perceptions of how parents should deal with the coming-of-age issues of dating and sexuality.9 Yes, birth control became much more widely discussed with the advent and distribution of the birth control pill, but it was mostly available only to married women at first. And in states like Connecticut, any use of birth control was illegal until the 1965 Supreme Court ruling in Griswold v. Connecticut.

It is nearly impossible to understand these changes without considering the economic context. Courtship began to change with the rise of industrialization and marked the gradual decrease of adult control. Before World War II, American child-rearing practices reflected the belief that controlling children’s behavior could prevent any untoward sexual exploration later in life. Additionally, parental supervision of courtship was much simpler in rural life, where work was more likely to be closer to the home. A suitor might call on a potential mate at her home with parents or chaperones very close by.

Industrialization led to the growth of cities and took adults away from the home for longer periods of time. The possibility for supervision decreased, as did the amount of space a family might have had in which courtship could take place. Dating moved from the parlor to the public sphere and progressively became more of an independent pursuit with less family intervention, particularly as marriage became more about romantic connections and less tied to making good economic matches between families.10

Add to this the widespread availability of electricity at the beginning of the twentieth century, which enabled nightlife to emerge away from the family home, and the automobile, which became an important part of American dating. Having a car provided more privacy and the ability to travel even farther from parental supervision. Drive-in restaurants and movies as well as lovers’ lanes are examples of semiprivate settings where teens went to escape adult oversight.

Highly populated cities offered more anonymity, and the expansion of suburbs following World War II created even more space for young people to congregate away from adults. The 1950s economic boom created the possibility for many people to experience youth as a time of leisure, whereas the previous generation was much more likely to be in the labor force. Young people likely had fewer responsibilities than their parents had before them, and childhood and adolescence were increasingly seen as time for fun.11 Dating became associated with recreation rather than procreation, as the search for a spouse became a more distant concern.

Historian Beth L. Bailey describes how remaining chaste before marriage gradually lost its economic value in the marriage market, particularly as women had more opportunities to become self-supporting.12 The new affluence of the postwar era led to higher rates of high school and college attendance, which increased the physical distance between adults and young people, as well as increasing the opportunities for couples to be alone.

The influence of Sigmund Freud and Benjamin Spock in the postwar era also altered perceptions about sexuality and childhood. Both Freud and Spock considered children inherently sexual, so sexual curiosity was natural, even necessary for healthy development. Unlike the prewar notion that control created a well-adjusted child, postwar advice urged parents to avoid shaming their children. Parents were encouraged to provide information about sex, a major shift from prewar practices.

Contrary to our collective nostalgia suggesting otherwise, premarital sex did occur before the so-called sexual revolution of the 1960s; it was the reaction to premarital sex that changed. In midcentury, for instance, if sex resulted in pregnancy, it was more likely to remain secret through a quick marriage, a forced adoption, or, in some situations, an abortion disguised as another medical procedure.13 The main difference now is that we are more likely to acknowledge both premarital sex and teen pregnancy than in the past. Teenage girls today are less likely to be pressured into early marriage and more likely to have access to birth control and information about sex.

Starting in the early 1970s, a backlash against the new openness began.14 The new sexual openness led to fears that a more accepting approach to childhood sexuality had gone too far, that the lack of shaming in early childhood had weakened restraint against premarital sex, and that this permissiveness was to blame for the counterculture’s freewheeling views toward sex and drugs.

These concerns about openness blamed behavioral changes on the availability of information and did not take into account the demographic, economic, and political changes of the twentieth century. Our contemporary ambivalence about sexuality was born, as were complaints that the media make young people do it. Today American adults want young people to be both psychologically healthy and sexually restrained, which is why we are at best ambivalent about providing children with information about sex. In many schools, sex education is now often just abstinence education.

Rather than a secret shame, sex today is out in the open. It’s on talk shows, in newspapers, blogged about, tweeted, and texted. Talking about sex and sexuality was once taboo; we do a lot of it now. But talk is not the same as action.

More Media, Less Promiscuous Teens

Just as sexuality seems like a new invention for each generation of teens, the fear of teen sexuality is renewed in each adult generation, particularly as new forms of media and communication technology enable young people to evade parental control.

In recent years, public concern has swirled around teens using social media and smartphones to post and send sexually charged messages or images. “Sexting”—a media-manufactured term if there ever was one—has become a big topic on infotainment programs. Google “teen sexting” and you will find more than a million hits, with many of the news stories about the alleged danger of this supposedly exploding new phenomenon.

Talk of charging teen “sexters” as sex offenders or with possession of child pornography, suicide following a missent “sext,” and other horror stories of images posted online provide a frightening backdrop to our ever-changing media environment, one in which supposedly sexually out-of-control teens don’t know any better than to send risqué photos of themselves.

There’s just one thing that’s often left out of these modern-day morality tales: most teens don’t “sext.” A Pew Internet and American Life study conducted in late 2011 found that only 2 percent of twelve- to seventeen-year-olds had ever sent nude or seminude photos of themselves. By contrast, more than double the percentage of people their parents’ age admitted that they had: 5 percent of thirty- to forty-nine-year-olds sent racy pictures.15

Teenage sex is by now a cliché associated with irresponsibility, disease, promiscuity, and unwanted pregnancy, while adults are often considered more mature and capable of self-control. We claim that teens have trouble controlling themselves due to raging hormones, implying that promiscuity is somehow natural and inevitable. (Ironically, while we blame biology for shaping teen behavior, we condemn young people for doing what allegedly comes naturally.) Meanwhile, we ignore the majority of teens who are responsible or do not engage in sex, and we don’t stereotype promiscuous adults as hormone-crazed animals.

The Centers for Disease Control and Prevention (CDC) studies teens’ sexual behavior in its “Youth Risk Behavior Surveillance System” (YRBSS) (emphasis mine), but rarely do we study adults’ sexual behavior as risky.

Even teens themselves think their peers are having sex more than they really are. A survey conducted by the National Campaign to Prevent Teen Pregnancy found that more than half of teens overestimated the percentage of their classmates who are sexually active.16 Even teens might be surprised to learn that their age group is actually less likely to be sexually active than teens were twenty years ago. According to the CDC’s YRBSS study:

• The rate of high school students who have ever had sexual intercourse declined from 54 percent in 1991 to 47 percent in 2011.17

• For those sexually active, condom use increased from 46 percent in 1991 to 60 percent in 2011; 87 percent reported using some form of birth control during their last sexual intercourse.18

• The birthrate for teens fifteen to seventeen fell to seventeen per thousand in 2010, the lowest rate in the nation’s history.19

• Teen abortion rates dropped 59 percent between 1988 and 2008 to an all-time low since becoming legal in 1973.20

• Despite claims of a widespread “hookup culture,” where teens regularly have sex outside of relationships, the majority of both males (56 percent) and females (70 percent) report that their first sexual experience was within a steady relationship.21 Just 10 percent of teen females and 12 percent of males report having had heterosexual oral sex but not intercourse.22

While media critics tend to focus on popular culture, the decisions to have sex, use contraception, and, if pregnant, have a baby or an abortion are complicated ones. But some clear patterns exist. Regardless of ethnicity, males are more likely to claim sexual experience, which may tell us more about their perceptions of masculinity than their actual behavior. African American males are significantly more likely to report being sexually active than any other group.

As sociologist Mike A. Males points out in his book The Scapegoat Generation: America’s War on Adolescents, this does not necessarily mean they really are having sex, but they may feel pressure to report that they are. African American males are significantly more likely to report that their first sexual encounter took place before they were thirteen; in 2011, 21 percent claimed early sexual activity. Among females, only 7 percent of African Americans and 3 percent of Latinas or whites report the same early sexual onset. By contrast, 11 percent of Latino boys and 5 percent of white boys claim to have had sex before age thirteen. A 2008 study found that African American males experienced a rise in self-concept after becoming sexually active.23 Definitions of masculinity are often related to sexual experience, particularly among male peers, reflecting the concept of hegemonic masculinity I discussed in prior chapters. Despite the common belief that kids are having sex earlier and earlier, only 13 percent of American teens report having had sex before age fifteen, and 6 percent reported having sex before age thirteen.24

Teen birthrates also reflect measurable racial and ethnic differences. The birthrate for African American teens is nearly triple and the birthrate for Latina adolescents nearly four times higher than for white and Asian American teens.25 As I will discuss in greater detail in the next chapter, teen birthrates have declined dramatically over the past two decades, with the most dramatic drop in African American teen births from eighty-two per thousand in 1990 to thirty-two per thousand in 2009.26

There are also racial and ethnic differences regarding sexual health. As a 2006 study found, just one-third of sexually active African American males and less than half of African American females had information about contraception before they had sex; African American teens also contract HIV/AIDS at a rate far higher than their peers.27 Regardless of race, fewer teens today learn about birth control at school, thanks to the political support of abstinence-only education. According to a 2012 Guttmacher Institute Report, nearly a quarter of teens never received formal instruction about contraception; by contrast, in the mid-1990s less than 10 percent had not learned about birth control in school.28

While popular culture provides some titillating explanations for teen sex, others are decidedly more mundane. Family monitoring, support, and communication are important predictors of whether teens will hold off on having sex. Using drugs and alcohol is also associated with early sexual initiation.29

Poverty is also a risk factor for sexual activity. This might seem counterintuitive, because early births also aggravate the experience of poverty, but for those who see little hope of college or a career ahead, the risk of pregnancy may seem less of an imposition than it does for middle-class or affluent teens; the aspirations those teens take for granted may feel unrealistic to a young person growing up poor. Sociologist Mike A. Males analyzed teen birth trends and found that we can best predict the teen birthrate by tracking adult birthrates and poverty rates; teen birthrates mirror adults’ rates, not changes in media or abstinence education.30 Simply charging poor people with personal failure helps us avoid examining why the link between teen motherhood and poverty is so strong, which I discuss in the next chapter.31

In order to understand teen sex, we need to consider the role that adults play. Males points out in Scapegoat Generation that adult men ages nineteen to twenty-four are far more likely than teen boys to father children born to teenage girls, and adult men, not teenage boys, are most responsible for spreading HIV and other sexually transmitted diseases (STDs) to girls.32

This reality highlights the inadequacy of the term teenage sex. We overlook the role adults, particularly adult men, play in teen pregnancy and the spread of sexually transmitted diseases. Adult men are responsible for six out of ten births to girls eighteen and younger. Also, because the HIV infection rate for teen girls is double the rate of teen boys, it is unlikely that teen boys are responsible for this disparity.33

In a way, our society enables adult sexual involvement with teens. In most states, the age of sexual consent is now sixteen; if this seems very young, English common law, on which American laws are based, originally set the age of consent as young as ten. Some states even have separate ages for males and females: South Carolina’s age of consent is just fourteen for girls but sixteen for boys, making it legal for an adult to have sex with a fourteen-year-old girl but not a boy of the same age. In these states teens cannot vote or purchase alcohol, but adults can have sex with them without legal recourse. Passing laws certainly does not make people have sex or not, but it tells us what age lawmakers think it is acceptable for adults to engage in sex with teen girls. The numbers indicate that adults are very much a part of the “teenage” sex equation.

Finally, and perhaps most significantly, we overlook the role of sexual abuse in the discussion about teens and sex. For many young people, sex is not a choice they have made, but was forced upon them. A recent CDC study found that 18 percent of women who had sex before the age of fifteen considered the experience “unwanted”; just 9 percent of males did. Women who had sex before the age of twenty were most likely to regard the experience as unwanted if their partner was at least three years older.34

Adolescents who have been sexually abused as children are also far more likely to engage in riskier sexual practices in the future. As researchers Debra Boyer and David Fine found in their comprehensive study of teens who became pregnant, 55 percent of their respondents had been molested, and 44 percent had been raped prior to their pregnancy. The authors noted that “sexually victimized teenagers began intercourse a year earlier, were more likely to use drugs and alcohol and were less likely to practice contraception.”35 Yet much of our research on sex continues to focus on popular culture, ignoring the complex roles politics, race, and poverty play in the teen-sex equation.

Studying Sex: Media Research Makes News

In addition to violence, American media-effects researchers continually attempt to find negative connections between teens’ sexual behavior and media. Several studies, relying on correlational methods that measure relationships but cannot assess cause and effect, have claimed that watching sexual content leads to actual sex by teens.36

One very interesting study, published in the journal Pediatrics, interviewed nearly eighteen hundred young people aged twelve through seventeen about their sexual experiences, television viewing, and other factors that may lead to earlier sexual behavior. The researchers then reinterviewed them one year later to see what factors were most associated with sexual advancement. While they found that watching sexual situations on television is associated with more sexual behavior, other important factors, like age, parents’ education, and scoring high on a “sensation-seeking” scale, were actually stronger predictors of sexual behavior.

Other important predictors, including having many older friends and engaging in other risky behaviors, are included in the analysis, but the authors don’t discuss these issues in their recommendations. The authors acknowledge that they cannot really assess cause and effect here; teens thinking about having sex may be more likely to watch sexual situations on television. Nonetheless, they conclude that reducing sexual content on television would delay teen sex.37

Researchers’ continued focus on television as the main problem, even when their own research offers more important findings, reinforces the public’s belief that television and media are the keys to change. Likewise, the Kaiser Family Foundation (KFF) has conducted several studies of sex on television, mostly looking for reasons to attribute risky and dangerous teen sexual behavior to popular culture. Examination of studies like these reveals our tendency to underestimate social structure and overstate the power of popular culture.

While the authors of the most recent KFF study, published in 2005, note the importance of peers, parents, and schools in sexual socialization, television is the main focus of their study. Researchers analyzed more than a thousand television programs, counting incidents they deemed sexual in nature.38 By using content analysis, researchers determined the meaning of sexual messages of these programs, yet they interviewed no young people to ascertain how teens actually interpret these messages. This method isolates meaning from the context of both the program and the audience, a problem the researchers don’t seem to be worried about.

Additionally, this study broadly defines sexual messages to include seductive gestures, flirting, alluding to sex, touching, kissing, and implication of intercourse. When the incidents get boiled down into statistics, hugs and handholding can appear the same as more explicit representations of sex. Their biggest finding: 70 percent of the shows they included in the 2005 study had some form of “sexual” content, up from 64 percent in 2002 and 56 percent in 1998.39

The authors surmise that “televised portrayals of intercourse play a role in socializing young viewers to the patterns of behavior that are normative in our culture.”40 But the study never tests its assumptions about youth empirically in this or their previous studies. The researchers seem to presume that young viewers will be heavily influenced by television, entirely discounting young people’s ability to interpret media images on their own or the role of other factors in deciding whether to have sex.41 The authors note that media are an important source of information about sex for teens, but media sources are not where young people get most of their knowledge. In their 2001 report, the authors noted that only 23 percent of teens say they learn “a lot” about pregnancy and birth control from television, while 40 percent have “gotten ideas on how to talk to their boyfriend or girlfriend about sexual issues” from media.42

In the third paragraph of the 2001 KFF report, the authors note almost as an aside that many teens feel they do not get enough information about sex from parents or teachers. Rather than focusing on this point, they continue to study the media issue. Adults should focus on enriching, rather than restricting, young people’s knowledge about sex and start by dealing with what they do know. Media get flak for leaking what we like to think of as adult information, which we are either too embarrassed or unwilling to share ourselves.

The authors of the 2005 study cite the risky sexual behaviors some adolescents engage in to support the need for their research. Although they preface their findings by acknowledging that teen pregnancy has declined, they go on to emphasize negative behaviors: the percentage who do get pregnant (including young adults eighteen and nineteen) or who contract sexually transmitted diseases.

As I noted in the previous section, most sexually active teens reported using condoms, yet the authors of the KFF report chose to invert these statistics to tell the negative story, focusing on how many teens did not use a condom during their last sexual encounter. Dangerous behavior is of course important to examine, but perhaps the biggest problem here is that we focus on adolescent risk and fail to put it in the context of adult behavior. For instance, the 2006 General Social Survey found that 23 percent of adults used condoms during their last sexual encounters.43 By ignoring adults within the media-sex panic, we pathologize teen behavior even if it is consistent with (or better than) that of adults.44

In sum, this study found that sexual content (as the researchers define it) on television rose from a similar study three years before, but so what? We are left with no information about how young people actually interpret and make sense of these programs. It is important that we find out how young people interpret sexual images in advertising, music, and television in their context, and in their own words. If we are so concerned about teen sexuality, we need to talk with them, not just about them, to learn more.

A British study did just that, yet received none of the media attention the KFF study did. “Talking Dirty: Children, Sexual Knowledge, and Television” critically examined how children make sense of representations of sexuality and romance on television.45 Yes, initiating conversations about sex with children is difficult; it is considered morally questionable in some situations. Likewise, providing sex education in schools is frequently the subject of fierce debate. Maybe that’s why studies conducted by the Kaiser Family Foundation focus only on television. Talking about sex is considered indecent where children are concerned, enabling us to maintain the illusion that they can be separated from the rest of the world.

No doubt, this is why the British authors chose a provocative title for their study, which takes a rather different perspective on media and young people than the KFF study. “Talking Dirty: Children, Sexual Knowledge, and Television” was published in the journal Childhood in the spring of 1999 with no American fanfare. Not surprising, considering that the authors challenge our assumptions about childhood and sexuality at every turn.

The authors critique the belief that television is responsible for the loss of childhood innocence and argue it is best to find out what children think rather than continue to focus on what adults wish they didn’t know. The researchers sought to find out how the children they studied made sense of the programs they watched and how they understood the content in the context of their own lives. Unlike the KFF study, which presumed heavy influence, these researchers were interested in how children negotiated their social roles as children dealing with a subject that is regarded as off-limits for them. The research team sought to find out what the children knew and how they made sense of it on their own terms, avoiding value judgments in the process.

Second, in contrast with traditional views that we need to pay attention only to teenagers when it comes to sex, these researchers talked with six-, seven-, ten-, and eleven-year-olds in small groups, asking them to talk about what programs they liked and disliked. Children were then asked to sort a list of program titles into categories, which enabled researchers to see how the children defined adult content. They then compared these shows with programs that the kids considered appropriate for children, teens, or general audiences. The researchers never brought up the topic of sex themselves, but the children occasionally did when defining what made a program “for adults.”

Researchers found that although most children felt that programs with romantic themes were “adult” shows (like daytime talk and dating shows), the ten- and eleven-year-olds were quite familiar with these programs and others like them. Kids reported that adult shows were appealing because they knew that they were supposed to be off-limits. Gender was also a factor here: younger boys were likely to deny any interest in shows with kissing or romantic themes; that was “girl stuff,” which they wanted no part of. The authors also observed that children feigned shock or disgust about romantic scenes, as the kids properly performed the role of children, supposedly ignorant about all things romantic or sexual.

Rather than advance the narrow view that sexual content does something to children, the researchers also found that children use talk about sex in popular culture to build peer connections and to make sense of sexuality from a safe distance. Children use adult themes from television to try to demonstrate adult-level competence and knowledge. The researchers concluded that neither television nor audiences “hold anything approaching absolute power. Television obviously makes available particular representations and identities.… In defining and debating the meanings of television, readers also claim and construct identities of their own.”46 Young people may borrow ideas from popular culture as part of an interactive negotiation process where children seek acceptance and status from their peers. Although popular culture is an important part of this undertaking, it is not the all-powerful force many adults fear.

Several American studies also demonstrate the importance of sexuality within elementary and middle school children’s peer groups. For instance, sociologists Patricia Adler and Peter Adler studied children’s peer groups for eight years and concluded that we need to understand the process of how preadolescents navigate peer cultures. Rather than viewing peer groups simply as negative sources of peer pressure, the Adlers found that children negotiate individual identities while striving to maintain status among peers, and sexual themes are interwoven into this process. Adults somehow fail to acknowledge (or remember) that curiosity about sexuality is a big part of growing up.47

Similarly, in her study of elementary school students, sociologist Barrie Thorne discusses how games like “kiss and chase” demonstrate that children actively construct meanings of heterosexuality through play.48 We might deem this sort of behavior innocent child’s play, but that would ignore how children themselves define their experiences. How many of us played chasing games like this, where girls and boys excitedly try to catch and kiss each other? Thorne details how children’s play incorporates heterosexual meanings into everyday occurrences and shapes male-female interactions. She found that kids accused peers of “liking” a student of the other sex in order to police gender boundaries and sanction crossover behavior from time to time. Think of how popular rhymes (“Susie and Bobby sitting in a tree, K-I-S-S-I-N-G”) among girls highlight the importance of romantic connections within children’s games.

Obviously, as kids get older, sexual content takes on different meanings. In a study of middle school students, researchers found that boys regularly recounted sexually explicit movie scenes to their peers, storytelling that helped solidify the teller’s rank in the group.49 The boys in this study used discussions of movies to reinforce their perspective of women, learning to at least appear to sexually objectify them in order to avoid rejection by their peers. For many boys, reinforcing male dominance and objectifying women lead to popularity.

This example demonstrates that young people are not simply influenced by popular culture; they negotiate meaning within the context of their friends and within the larger structure of social power. There is a problem when boys must adopt very narrow versions of masculinity to fit in with one another. But if we were to somehow totally succeed in keeping children away from these sorts of films, or even do away with all such representations of sexuality in popular culture, we would have done nothing to address gender inequality, which is at the root of their status-seeking behavior.

The media did not initiate women’s objectification, but we see it most clearly there. Popular culture is often where we see reflections of power and inequality. It is naive to think that the next generation reproduces this shallow form of sexuality only because they see it in movies or on TV. They are part of a society where gender inequality is replicated in many social institutions, including education, religion, government, and the workforce. Our popular culture shows us some of the ugly realities of our society but is not where these realities originate.

In my own research with high school students, I found that teens discuss sexuality in media differently depending on the context: in groups composed mostly of males, they collectively celebrated sexual images of women in advertising, while mainly female groups tended to challenge the objectification of women.50 One male student in my study (whom I’ll call Scott) stood out the most. Scott appeared to be sixteen, slender, and perhaps the class intellectual. After viewing an Evian commercial with a young, attractive woman swimming in a pool, Scott eagerly announced that he wanted to “buy this girl.”

In truth, it seemed his intention was more to fit in with his more athletic male peers than to demean women. As his male classmates laughed, he continued, saying, “I just want to go buy Evian with that girl swimming in it … I just like her commercial!” His peers met Scott’s comments with supportive amusement. Interestingly, boys in predominantly female groups tended to agree with their female classmates that the ad’s use of a scantily clad woman offended them. The teens I studied clearly demonstrate how the meaning of popular culture is created collectively in the context of peer culture, a negotiation process that goes way beyond simple cause and effect.

As this example demonstrates, sometimes young people talk about sexuality in order to bolster their status among their friends. Rather than only criticizing the quantity of sexual images in the media, we could provide more opportunities for young people to critically discuss these images, a way to better understand underlying beliefs about sex and gender. Instead of young people simply learning about sex in the media and then acting on what they watch, preteens and teens try to make sense of what they see in the context of their other experiences. Seeing all these images of sex does not necessarily mean that children interpret them by having sexual intercourse.

Coming to terms with sexuality, in popular culture and in their own lives, is a major part of adolescence and preadolescence as well. While many adults are concerned that kids are becoming sexually aware too early, we cannot place all of the blame on media. Although this is tempting, we must realize that sexuality is not just a consequence of media culture, but also a part of growing up.

The Social Construction of Sex and Gender

Highly sexualized imagery in popular culture matters, but reductive cause-effect, monkey-see/monkey-do arguments minimize its importance. People may not imitate media culture to the degree that many fear, but this doesn’t mean that media content is irrelevant.

Instead, popular culture can help us understand the dynamic way in which both sex and gender are socially constructed. It is difficult to think of sexuality as anything but personal and individual, but the way we understand sex is socially constructed. Media messages do not simply socialize us; rather, they reflect ongoing struggles surrounding the meanings of sex and gender, meanings that are often contested and reflect broader issues of social change. Sexuality is a central site where struggles over social power take place.51 So while sexuality is personal, the uses and meanings attached to the practice are decidedly social and linked with broader systems of power, gender, race, and class.

Communications scholar Susan J. Douglas describes how highly sexualized images of women in media are a form of “enlightened sexism,” reflecting a backlash against women’s improved social status within the past half century. She argues, “Enlightened sexism is a response, deliberate or not, to the threat of a new gender regime. It insists that women have made plenty of progress because of feminism—indeed, full equality has allegedly been achieved—so now it’s okay, even amusing, to resurrect sexist stereotypes of girls and women.”52

When we think about highly sexualized media content through a structural framework, we can see that such imagery is about more than just luring people to become sexually active young—which statistically we can see isn’t happening. Instead, a highly sexualized media culture is one that can reinforce a narrow version of femininity, one that still emphasizes a woman’s sexual appeal as central to her value. These images serve as a form of resistance to changes in the gender order, and, by insisting on an ongoing quest to become and remain “hot,” they also help sell countless products. Our commercially based media require personal dissatisfaction to move merchandise.

Not only do media images reflect an ongoing power struggle, but debates about sex do as well. Controlling information about sex has historically been used in order to maintain dominance over others. Withholding knowledge about birth control keeps many women in developing countries in poverty, and withholding information about sex from children is a way to maintain adult authority.

An ongoing and unresolved political debate, whether and when children should have comprehensive information about sex, reflects this conflict. And whereas it may seem like keeping information from kids protects them, it can be more dangerous than we might think. As several studies have found, abstinence-only education does not effectively delay teen sexual onset. Teens who take virginity pledges are less likely to use condoms when they do have sex and are just as likely to contract STDs as their peers.53 By trying to keep information about sexual health from young people, adults are actually putting them at greater risk.

Second, maintaining the myth of childhood innocence is not simply a benign fantasy; it can be a dangerous one. Sexually curious or sexually knowledgeable kids are defined as lesser children, or, as media studies professor Valerie Walkerdine put it, “virgins who might be whores.” A child with knowledge of sex is considered damaged, spoiled, and robbed of his or her childhood, when in fact their knowledge may stem from sexual abuse.54 Most important, clinging to the notion of childhood innocence serves to further entice those who exploit children. Defining children as pure and powerless ironically sets some children up for abuse. Abusers are often titillated by innocence, which our cultural construction of childhood unconsciously supports.55

Virginity has served as a sexual commodity for centuries, increasing female value on the marriage market in the past and fueling male fantasies in the present. Innocence serves as a sexual marker denoting increased desirability, reflecting the traditional gender order where women’s passivity and lack of experience are prized and reproduce patriarchal power. In recent years female teen celebrities have seen their virginity used as part of a marketing strategy and their transition to adulthood marked by a highly sexualized turn, a shift often promoted by industry executives rather than being simply a matter of personal choice.

In fact, the majority of concerns about teens and sex are really about teenage girls. Historically, abstinence has been a female burden, with girls and women supposedly responsible for regulating male sexuality. The social control of women has been secured in recent history by policing female sexuality. Even within the confines of marriage, at the turn of the twentieth century femininity meant not taking too much pleasure from sex. Clinics provided treatment to women who suffered from such unnatural urges. Authorities viewed women who enjoyed sex as deviant and considered them dangerous.

Even when women’s desire ceased to be considered a medical problem, sexual gratification was defined as a socially undesirable quality, one that might reduce a woman’s chances for marriage. This was of course a serious threat in a time when women’s wages rarely enabled them to live independently. The need for male financial support, as well as the fear of unplanned pregnancy, socially and economically constrained women in the recent past.

The threat of rape has also historically been used to keep women from public spaces, supported by the practice of humiliating rape victims in court (and in the news media), which sometimes continues despite rape-shield laws. Women’s sexuality has been a double-edged sword: a woman’s worth has been tied to her appeal to men, yet rape has historically been blamed on women for being too appealing. The threat of sexual violence, even if not carried out, serves to limit women’s movement and freedom.

In recent decades the widespread availability of birth control and declines in the wage gap between men and women have created more personal freedom for women. But the old sexual double standards, that male sexuality is natural and female sexuality is a threat, are still alive in our fears about teens and sex. Concerns about teens’ sexual activity reflect shifts in the gender order: attempts to control teen sexuality tend to leave male sexuality out of the conversation. A New York Times article even suggested that earlier onset of puberty for girls might be due to overstimulation from sex in media.56 Interestingly, this hypothesis would not apply to boys, since their physical maturation has remained relatively stable over the past century. So why is female sexuality so frightening?

Teenage girls are considered a threat when they seek to become more than just sexual objects—when they act as sexual agents, we worry. American culture still promotes the idea that only girls hold the keys to chastity, but at the same time they are held up as the ideal form of female desire. We see this representation of teenage girls in many forms of popular culture, but it certainly does not originate there: its history lies in our tendency to value women who are young and sexually available for men. Rather than only blaming media culture for this representation of teenage girls, we need to take a closer look at the nature of power, sex, and gender in contemporary American society. Underneath fears of teens having sex are concerns about the changing meaning of gender.

Girls are not the only objects of concern: historically, the sexual and reproductive practices of disempowered groups like immigrants, racial and ethnic minorities, gays and lesbians, and the poor have been subject to greater surveillance and control. When groups are considered a danger to themselves or to others, restricting their freedom seems justifiable. We rationalize social control of young people based on the few who are held up as promiscuous bad examples, insisting that these teens prove most adolescents are incapable of making responsible decisions or are too easily influenced by media.

Historically, fears that the population is becoming less Protestant and less white have led to attempts to control the reproduction of immigrant and nonwhite groups. This has been accomplished by policies promoting sterilization for the poor and racial and ethnic minorities, removing girls from their families if juvenile courts believed they were likely to engage in sex, and, more recently, demonizing mothers of color.57 Due to this fear, during the early part of the twentieth century white women’s pregnancies were encouraged, and their access to birth control and abortion was restricted. The sexuality of groups perceived to be a threat is labeled dangerous and serves to legitimate public policies that restrict members’ behavior.

Many African American men were lynched by whites allegedly protecting white women’s virtue; black male sexuality came to be defined as a threat to the racial order. States enacted miscegenation laws for much the same reason—to prevent a union of a nonwhite man and a white woman—but they were certainly not enforced when slave owners fathered the children of black slave women. This double standard reveals how the dominant group maintains power by controlling the sexuality of others. Concerns about promiscuity, pregnancy, and disease have served as a way for powerful groups to assert control over those whom they feel threatened by, whether the threat is real or imagined.

Sex and Popular Culture

Societal shifts spurred by economic changes have altered American life, which has made it more difficult to monitor teens. Popular culture did not create these changes, nor is it the key to changing young people’s behavior, which is not reflective of the sexually laden media content we so often see. As with violence, we have seen a decline in teen sexual activity in recent years.

Sex on TV, in movies, and online understandably makes many parents uncomfortable and embarrassed. Representations of sex in media expose the reality that childhood does not and cannot exist in a separate sphere from adulthood.


Ironically, we use sex as the ultimate dividing line between childhood and adulthood, the line in the sand that adults try so hard to maintain and young people try so hard to cross. We define sex as a ticket to adulthood, so we should not be surprised when teens do, too. Sex in popular culture reminds us that we cannot sustain the lengthened version of childhood we have idealized since the mid-twentieth century. Popular culture is an easy target, providing a never-ending stream of disturbing images, but it is not the root cause of the changes in the attitudes and practice of sexuality in the twenty-first century.

As we have seen, changes throughout the twentieth century provided young people with the means to become more independent from their parents, rendering their behavior far harder to control. We often associate changes in sexual behavior with changes in media. Historical shifts are difficult to see and understand, while media are by nature visible and always trying to grab our attention.

That said, we should not ignore representations of sexuality in media. They provide useful clues about power and privilege and can launch greater exploration of contested meanings of both sexuality and gender.

Notes

1. Bella English, “The Disappearing Teen Years,” Boston Globe, March 12, 2005, C1.

2. Kathleen Kelleher, “Birds and Bees: Don’t Let TV Be Your Teenager’s Main Source of Sex Education,” Los Angeles Times, April 30, 2001, E2; Brian Lowry, “Grappling with Teen Sex,” Los Angeles Times, February 20, 1999, A1; Kristina Lee, “MTV Show Promotes Teen Sex, Drug Use, Experts Say,” Fox 5 News (San Diego), January 11, 2011; Marla Matzer, “Racy Content Rising on TV,” Daily News, February 10, 1999, N1.

3. Deborah M. Roffman, “Dangerous Games: A Sex Video Broke the Rules, but for Kids the Rules Have Changed,” Washington Post, April 15, 2001, B1.

4. Ibid.

5. See discussion in Chapter 4 for further details. See also Barrie Gunter and Jill L. McAleer, Children and Television: The One-Eyed Monster?

6. Bill Albert, “With One Voice 2012: Highlights from a Survey of Teens and Adults About Teen Pregnancy and Related Issues,” National Campaign to End Teen Pregnancy, 2012.

7. Lyn Gorman and David McLean, Media and Society in the Twentieth Century: A Historical Introduction, 36–40.

8. Motion Picture Production Code, 1930.

9. Henry Jenkins, “The Sensuous Child: Benjamin Spock and the Sexual Revolution,” in The Children’s Culture Reader, edited by Jenkins, 209.

10. Stephanie Coontz, Marriage, a History: From Obedience to Intimacy; or, How Love Conquered Marriage.

11. For further discussion, see Martha Wolfenstein, “Fun Morality: An Analysis of Recent American Child-Training Literature,” in The Children’s Culture Reader, edited by Jenkins, 199.

12. Beth L. Bailey, From Front Porch to Back Seat: Courtship in Twentieth-Century America.

13. Rickie Solinger, “Race and ‘Value’: Black and White Illegitimate Babies, 1945–1965,” in Feminist Frontiers, edited by Laurel Richardson, Verta Taylor, and Nancy Whittier, 4th ed. (New York: McGraw-Hill, 1997), 282.

14. Jenkins, “Sensuous Child,” 225.

15. Anahad O’Connor, “Sending of Sexual Messages by Minors Isn’t as Prevalent as Expected, Study Finds,” Pew Internet and American Life Project, December 3, 2011.

16. Results of the National Campaign to Prevent Teen Pregnancy as reported by Lisa Mascaro, “Sex Survey: Teach Teens to Just Say No,” Daily News, April 25, 2001, N1.

17. Department of Health and Human Services, “Trends in the Prevalence of Sexual Behaviors,” National Youth Risk Behavior Survey, 1991–2011 (Washington, DC: Centers for Disease Control and Prevention, 2012).

18. Ibid.

19. Brady E. Hamilton and Stephanie J. Ventura, Birth Rates for U.S. Teenagers Reach Historic Lows for All Age and Ethnic Groups (Hyattsville, MD: National Center for Health Statistics, April 2012).

20. Kathryn Kost and Stanley Henshaw, U.S. Teenage Pregnancies, Births, and Abortions, 2008: National Trends by Age, Race, and Ethnicity (New York: Guttmacher Institute, 2012).

21. Guttmacher Institute, Facts on American Teens’ Sexual and Reproductive Health (New York: Guttmacher Institute, 2012).

22. W. D. Mosher et al., “Sexual Behavior and Selected Health Measures: Men and Women 15–44 Years of Age, United States, 2002,” Advance Data from Vital and Health Statistics, 2005, no. 362.

23. Danice K. Eaton et al., “Percentage of High School Students Who Engaged in Sexual Behaviors, by Sex, Race/Ethnicity, and Grade, United States, 2011,” Youth Risk Behavior Surveillance Summaries (Atlanta: Centers for Disease Control and Prevention, 2012); Amy E. Houlihan et al., “Sex and the Self: The Impact of Early Sexual Onset on the Self-Concept and Subsequent Risky Behavior of African American Adolescents,” Journal of Early Adolescence 28, no. 1 (2008): 70–91.

24. Guttmacher Institute, Facts on American Teens’ Sources of Information About Sex (New York: Guttmacher Institute, 2012); Eaton, “Percentage of High School Students.”

25. National Center for Health Statistics, National Vital Statistics System, “Birth Rates for Females Ages 15–17 by Race and Hispanic Origin, 1980–2009,” in America’s Children: Key National Indicators of Well-Being, 2011 (Atlanta: Centers for Disease Control and Prevention, 2011).

26. J. A. Martin et al., “Adolescent Births: Birth Rates by Mother’s Age and Race and Hispanic Origin, 1980–2006,” Centers for Disease Control and Prevention, National Center for Health Statistics, National Vital Statistics System; J. A. Martin et al., “Births: Final Data for 2005,” National Vital Statistics Reports 56, no. 6 (Hyattsville, MD: National Center for Health Statistics, 2007).

27. Guttmacher Institute, Facts on Sex Education in the United States (New York: Guttmacher Institute, 2006); Centers for Disease Control and Prevention, “Sexual and Reproductive Health of Persons Aged 10–24 Years—United States, 2002–2007,” in Morbidity and Mortality Report (Atlanta: Centers for Disease Control, 2009).

28. Guttmacher Institute, Facts on American Teens’ Sources of Information About Sex. See also L. D. Lindberg, “Changes in Formal Sex Education, 1995–2002,” Perspectives on Sexual and Reproductive Health 38 (2006): 182–189.

29. Maria C. Velez-Pastrana, Rafael A. Gonzalez-Rodriguez, and Adalisse Borges-Hernandez, “Family Functioning and Early Onset of Sexual Intercourse in Latino Adolescents,” Adolescence 40 (2005): 777–791; Emily Rosenbaum and Denise B. Kandel, “Early Onset of Adolescent Sexual Behavior and Drug Involvement,” Journal of Marriage and the Family 52, no. 3 (1990): 783–798.

30. Mike A. Males, The Scapegoat Generation: America’s War on Adolescents, 214–215.

31. In Framing Youth: Ten Myths About the Next Generation, 182–188, Males discusses the connections between poverty and early pregnancy. He argues that underlying fears of teenage pregnancy is fear of young people of color and that focusing only on pregnancy enables us to avoid talking about race and class. He concludes it is easier to demonize teen mothers and popular culture than to understand why teen pregnancy is so much more likely among the poor. The middle-class privileges many Americans take for granted often do not apply to this disadvantaged group, who are less likely to benefit from public education and whose economic prospects, even without children, are rather grim. In sum, Males argues that the teens most at risk of becoming pregnant are the same ones we demonize as we refuse to acknowledge the economic and social challenges they face prior to becoming parents.

32. Males, Scapegoat Generation, 47–48, 52.

33. Mike A. Males, Teenage Sex and Pregnancy: Modern Myths, Unsexy Realities, 21, 30.

34. G. Martinez, C. E. Copen, and J. C. Abma, “Teenagers in the United States: Sexual Activity, Contraceptive Use, and Childbearing, 2006–2010,” National Center for Health Statistics 23, no. 31 (Hyattsville, MD: National Survey of Family Growth, 2011).

35. Debra Boyer and David Fine, “Sexual Abuse as a Factor in Adolescent Pregnancy and Child Maltreatment,” Family Planning Perspectives 24 (1992): 4–19.

36. J. D. Brown and S. F. Newcomer, “Television Viewing and Adolescents’ Sexual Behavior,” Journal of Homosexuality 21 (1991): 77–91; J. Bryant and S. C. Rockwell, “Effects of Massive Exposure to Sexually Oriented Prime-Time Television Programming on Adolescents’ Moral Judgment,” in Media, Children, and the Family: Social Scientific, Psychodynamic, and Clinical Perspectives, edited by D. Zillman, J. Bryant, and A. C. Huston (Hillsdale, NJ: Lawrence Erlbaum, 1994), 183–195; Rebecca L. Collins et al., “Watching Sex on Television Predicts Adolescent Initiation of Sexual Behavior,” Pediatrics 114 (2004): e280–e289.

37. Collins et al., “Watching Sex on Television.”

38. D. Kunkel et al., “Sex on TV 4” (Washington, DC: Henry J. Kaiser Foundation, 2005). This study is the foundation’s fourth study of sex on television, which analyzed 1,154 programs from the 2004–2005 season. The authors sought to address whether the frequency of what they defined as sexual messages was increasing, how sexual messages are presented, and whether the risks and responsibilities of sex are portrayed.

39. Accessible online.

40. Kaiser Family Foundation, “Sex on TV 4,” 42.

41. For discussion about how audiences create varying meanings from texts and are not simply manipulated by messages, see David Morley, Television, Audiences, and Cultural Studies; John Fiske, Understanding Popular Culture (London: Routledge, 1989); and Ien Ang, Living Room Wars: Rethinking Audiences for a Postmodern World.

42. Kaiser Family Foundation, “Sex on TV 4,” 1.

43. Ibid., 2; John E. Anderson, “Condom Use and HIV Risk Among U.S. Adults,” American Journal of Public Health 93 (2003): 912–914.

44. For more discussion, see Males, Framing Youth: Ten Myths About the Next Generation, chap. 6.

45. Peter Kelley, David Buckingham, and Hannah Davies, “Talking Dirty: Children, Sexual Knowledge, and Television,” 221–242.

46. Ibid., 238.

47. Patricia A. Adler and Peter Adler, Peer Power: Preadolescent Culture and Identity.

48. Barrie Thorne, Gender Play: Girls and Boys in School.

49. Donna Eder, Catherine Colleen Evans, and Stephen Parker, School Talk: Gender and Adolescent Culture, 83–102.

50. Karen Sternheimer, “A Media Literate Generation? Adolescents as Active, Critical Viewers: A Cultural Studies Approach.”

51. See Michel Foucault, The History of Sexuality, vol. 1, An Introduction (New York: Vintage, 1980).

52. Susan J. Douglas, The Rise of Enlightened Sexism: How Pop Culture Took Us from Girl Power to Girls Gone Wild, 9.

53. Sexuality Information and Education Council of the United States, “Public Policy Fact Sheet” (Washington, DC: SIECUS, October 2007); Hannah Brückner and Peter Bearman, “After the Promise: The Consequences of Adolescent Virginity Pledges,” Journal of Adolescent Health 36 (2005): 271–278.

54. Valerie Walkerdine, “Popular Culture and the Eroticization of Little Girls,” in The Children’s Culture Reader, edited by Jenkins, 257; Jenny Kitzinger, “Who Are You Kidding? Children, Power, and the Struggle Against Sexual Abuse,” in Constructing and Reconstructing Childhood: Contemporary Issues in the Sociological Study of Childhood, edited by Allison James and Alan Prout, 165–189.

55. For further discussion, see James R. Kincaid’s provocative book, Child-Loving: The Erotic Child in Victorian Literature.

56. Lisa Belkin, “The Making of an Eight-Year-Old Woman,” New York Times Magazine, December 24, 2001, 38.

57. For a discussion of this practice in the beginning of the twentieth century, see Steven Schlossman and Stephanie Wallach, “The Crime of Precocious Sexuality,” in Juvenile Delinquency: Historical, Theoretical, and Societal Reaction to Youth, edited by Paul M. Sharp and Barry W. Hancock, 2nd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1998), 41–62. Immigrant girls were often considered delinquent if juvenile courts believed they were likely to engage in sex—no proof of actual behavior was necessary.




Changing Families As Seen on TV?

In June 2009, MTV debuted 16 and Pregnant, an unscripted show featuring pregnant teenagers. Later that year, MTV began airing a spin-off of the show, Teen Mom, sparking fears that teen girls would try to get pregnant in order to be on television.

Cable news helped gin up this controversy, airing segments with debates about whether the shows would encourage or discourage teen pregnancy and whether social services should intervene and remove the children from these homes.1 (A 2012 survey found that 77 percent of teens thought the show presented the challenges of pregnancy and parenthood rather than inspiration to get pregnant.)2 The conflicts seen on Teen Mom also provided fodder for tabloids, which focused on the relationship dramas of its participants.

The Teen Mom debates happened on the heels of other high-profile controversies. When 2008 vice presidential candidate Sarah Palin revealed that her teenage daughter Bristol was pregnant, it became a touchstone of debate, furthered when Bristol became a spokesperson for abstinence. Her pregnancy followed another scandal that took place during the 2007–2008 school year, when seventeen girls from a high school in Gloucester, Massachusetts, became pregnant. Their story became national news when the school’s principal told Time that many of the girls wanted to get pregnant, and in fact created a pact to do so.3 As of this writing, there has been no confirmation from any of the girls involved that they actually made any sort of prepregnancy pact—one pregnant teen from the school denied that such a pact existed—yet the story sparked a national debate about what caused these teens to get pregnant.

There is a simple answer, the one you probably learned about in health class. But much of the debate focused on celebrities and representations in popular culture. Juno, the 2007 dark comedy about a flippant teen who becomes pregnant and decides to give her baby up for adoption, was a prime topic of discussion on Good Morning America, The Today Show, and other outlets across the media spectrum.4 Did Juno make pregnancy cool? And what about Jamie Lynn Spears, the star of the tween Nickelodeon show Zoey 101, who became pregnant at sixteen? Was coverage of her pregnancy and subsequent delivery an inspiration to her young fans?

Regardless of what causes these and other teens to get pregnant, popular culture is capitalizing on the issue. An ABC Family program, The Secret Life of the American Teenager, about a wholesome fifteen-year-old girl-next-door type who gets pregnant, could have been called “The Secret Life of an American Teenager,” but by generalizing to “the” American teenager, the show implies that this teen, in fact, could be anyone.5

Teen pregnancy and single parenthood are important issues in the United States; American teens are more likely to become pregnant than their counterparts in other industrialized nations.6 Children are now much more likely to live in single-parent families in the United States compared with past generations. These changes are significant, but what has caused them?

What about celebrities who seem to glamorize single parenthood or having children outside of marriage? Do television shows featuring kids who mouth off to their parents mock the importance of families? Can movies make divorce seem like a liberating experience? While all of these issues help us raise questions about the state of relationships and families today, other, less visible factors (that are, frankly, less fun to talk about) lie behind the major shifts in families. Economic changes are the most central factor in the rise in single-parent families and are the best predictors of teen pregnancy. In this chapter we will explore both the cultural and the structural changes that have created major shifts in how families form in the United States today, looking beyond popular culture.

Did Television Families Change Real Families?

During the 1990s, “family values” became a potent political mantra. The phrase became a shortcut for particular political positions, indicating support for traditional gender roles, where women’s main focus is on mothering, men’s on earning, and a stable marriage is the fulcrum upon which family life rests (for heterosexuals, at least). The family-values platform was also used as a label for issues dear to many evangelical Christians, including challenges to abortion and gay and lesbian rights. Questioning an opponent’s family values became a powerful weapon to cast someone as antifamily, a charge that could derail a candidate who didn’t seem to at least pay lip service to them. Criticizing popular culture is an easy way to do just that, and television and movies became a target for those from across the political spectrum.

Some of the first shots were lobbed at The Simpsons, the cartoon family that debuted on Fox, first as part of The Tracey Ullman Show in 1987, and then on its own in 1989. It has since become a widely celebrated series, but when the show first became popular, critics complained that it was a blatant assault on families. During a 1992 speech, then president George H. W. Bush stated, “We need a nation closer to the Waltons than the Simpsons.”7 (The Waltons aired from 1971 to 1981, featuring a family in rural Virginia during the Depression and World War II, supposedly representing a “simpler time.”)

Parents and educators expressed their disapproval of Bart, fearing that children would revere him as a role model. A school in South Carolina was to be named “Springfield Elementary” until someone discovered that Springfield was the name of Bart Simpson’s school. At that time, the Simpson family was considered the prime example of everything wrong with American families at the century’s end. The kids talked back to their parents and teachers and challenged authority. Critic Frank McConnell noted that The Simpsons “deconstruct[s] the myth of the happy family.”8 If we take a closer look at the Simpsons, we see a two-parent family where the mother stays home with the kids while the father works in the paid labor force. Isn’t this the family-values proponents’ dream?

Not exactly. The Simpsons contains elements of family life usually left hidden from the public eye. Father Homer is often portrayed as an overweight slob who rejoices in his own laziness. We assume that the parents “parent” the children and impart their values and wisdom. However, The Simpsons exposes the reality that sometimes children hold wisdom that their parents do not. Lisa typically knows more than both of her parents, and Bart frequently outwits them, revealing that moms and dads aren’t always in charge. It is clear why parents would be uneasy with this instability, which exposes how tenuous parental power over children really is.

The Simpsons and other contemporary television families served as powerful critiques of family life, in sharp contrast to midcentury television families like Leave It to Beaver, Father Knows Best, and the former president’s example, The Waltons, all of which idealized two-parent families. While these families likely bore little resemblance to most people’s experiences, they seemed to provide a cultural reference that maintained the illusion of the ideal family, something that contemporary television families did not. In any case, it is very interesting that mere representations of families have the power to elicit public outcry, presuming that the representations themselves are behind challenges real families face.

Were blended families more harmonious because The Brady Bunch seemed to forget that they were a stepfamily? Could interracial adoptive families solve their problems in twenty-two minutes like the Drummonds of Diff ’rent Strokes (which starred the late Gary Coleman from 1978 to 1985)? Probably not, but these shows and others like them reinforced traditional paternal authority, something that became harder to do as divorce rates rose. In 1970, 3.2 percent of the population over eighteen was divorced; this number nearly doubled to 6.2 percent in 1980 and continued rising to just under 9 percent in 1990.9 The divorce rate began its steep ascent around 1960, tripling by 1980, although after 1979 the actual rate of divorce began to decline modestly; it leveled off in the mid-1990s and has been stable since that time.10


Popular culture was a bit late on the scene; divorce was rarely portrayed in movies until the late 1970s, with films like An Unmarried Woman (1978) and Kramer vs. Kramer (1979). On television, Maude (1972–1978) featured Bea Arthur as a divorced (and extremely controversial) woman in middle age, whose adult daughter was only a marginal character on the show. One Day at a Time (1975–1984) featured a divorced mother of two daughters. Rather than preceding or encouraging the divorce trend, these shows appeared closer to the peak of divorce rates, which subsequently leveled off.

During this time, between 1975 and 1995 particularly, births to unmarried women rose significantly. In 1970 about one in ten births involved an unmarried mother. By the late 1990s the number rose to nearly one in three. When unmarried television character Murphy Brown had a baby in a 1992 episode, she became the target of political ire. Then vice president Dan Quayle accused Murphy of “mocking the importance of fathers” by having a baby without being married, which he argued contributed to the civil unrest in Los Angeles following the acquittal of the police officers accused of beating Rodney King. Quayle noted that the fact that the character was an “intelligent, highly paid professional woman” only made matters worse, since it appeared to be a “life-style choice.”11

Figure 7.1: Divorces and Divorce Rates, 1940–1997 Source: US Census Bureau, 1998


Figure 7.2: Divorce Rate per 1,000, 1960–2008 Source: US Census Bureau, 2011

I suspect that Quayle was not a regular viewer of the sitcom about a clever yet complicated news anchor. If he had been, he would have known that the father of Murphy’s baby—her ex-husband—left her upon learning of the pregnancy, which was unplanned. The choice she made to go through with the pregnancy rather than have an abortion might have been interpreted positively by those in the family-values camp. Instead, she drew family-values proponents’ anger, not only because of her single parenthood, but likely because she symbolized the successful career woman. In response, the show began the next season with Murphy suffering from sleep deprivation and struggling to cope with a newborn, hardly glamorizing single motherhood.

Beyond the debates over families remains an interesting question: why do representations of families in popular culture strike such a nerve? Whether it be the sitcom characters of the past or the celebrities we gossip about today, pop culture figures might not always be real, but they provide real touchstones that we can point to and all know about. Unlike individual families, which are by nature private and typically not subject to public scrutiny, fictional families and celebrity gossip are open targets. They might symbolize and, yes, sometimes help normalize issues currently undergoing societal shifts. Yet they are not the sources of change, but typically representations of larger social forces at work.

Trends in Teen and Unmarried Births: No Teen Mom Effect

Since the Murphy Brown controversy, we have witnessed two significant changes: many more births to unmarried women and far fewer births to teenagers. In 1991 the birthrate for teens aged fifteen to nineteen was just under 62 per 1,000. By 2010 this rate was practically halved, to about 34 births per 1,000.12 For the youngest teens, aged ten to fourteen, the birthrate fell more than 50 percent, from 1.4 births per 1,000 to 0.5 births per 1,000. The majority of teen births are to eighteen- and nineteen-year-olds: 66 per 1,000 in 2009 (down 30 percent from 1991).13 Compare these rates to the 1950s, when births to teens fifteen to nineteen peaked near 100 per 1,000—in other words, about 10 percent of women had a baby by the time they were twenty.14 By contrast, in 2010 less than 4 percent of women had their first child in their teen years.

There are also notable disparities by race and ethnicity. In 2010 African American teens aged fifteen to nineteen were more than twice as likely to give birth as their white counterparts and more than five times as likely as Asian American teens (51.5 per 1,000 compared with 23.5 per 1,000 and 11 per 1,000, respectively). Native Americans of the same age were nearly four times as likely as Asian Americans to have a baby (38.7 per 1,000). Latinas were the most likely to have a baby during their teen years, with rates at 55.7 per 1,000.15 Despite these differences, it is important to note that much of the decline in overall teen birthrates is due to a drastic drop in births to African American, Native American, and Asian American teens, whose birthrates dropped by more than half between 1991 and 2010.16 If popular culture were a major factor, we would not predict such stark disparities in teen birthrates. Clearly, there is a lot more than popular culture that explains why teens get pregnant.

In the United States, race and ethnicity are highly correlated with poverty, one of the best predictors of teen pregnancy. In 2010 New Hampshire had the lowest teen birthrate, at 15.7 births per 1,000 teens, while Mississippi had the highest, at 55.0 per 1,000. Not surprisingly, New Hampshire also boasted the highest median household income in 2010, while Mississippi had the lowest in the nation ($66,303 versus $36,850).17

When the perceived loss of opportunity a baby brings is lowest, the chances of a pregnancy and birth increase.18 For those who do not see a clear-cut path to college, for instance, or to a lucrative profession that a baby might potentially disrupt, pregnancy becomes more likely. For many low-income teens in this category, becoming a mother is a rite of passage into adulthood, a way of establishing a sense of status when few other pathways appear available. Yes, this often continues the cycle of poverty, both in the United States and globally. This could help explain why the United States has higher teen pregnancy rates than other industrialized nations do, since it also has significantly higher poverty rates, which are correlated with race.


Figure 7.3: Birth Rate for Teenagers 15–19 Years and Percent of Teenage Births to Unmarried Teenagers, 1950–2000 Source: S. J. Ventura, T. J. Mathews, B. E. Hamilton, National Vital Statistics Reports, 49, no. 10 (Hyattsville, MD: National Center for Health Statistics, 2001)

Yet debates about 16 and Pregnant and Teen Mom encourage us to think of teen motherhood as a relatively new problem, rather than one in decline. Teen pregnancies are not new, as noted in the figure; the major change is the rise of unmarried teen mothers.

The 1980s marked a turning point, when the majority of teens who gave birth no longer married first. Perhaps it is not an accident that this is when teen pregnancy became viewed as a major social problem. Yes, teen birthrates did rise after years of decline or stability. But what really changed was the view of marriage itself, which has gradually been decoupling from parenthood.

Pregnant teens are more visible now than at midcentury, when girls were often sent away or married quickly; either way, they were likely to drop out of school, whereas now school districts have made efforts to encourage graduation. Declines in teen marriage rates are not necessarily a bad thing: these marriages are among the most likely to end in divorce. According to a 2002 CDC report, those who marry before the age of twenty are significantly more likely to divorce than older couples: 48 percent of marriages with a partner under eighteen end in divorce within ten years, compared with 29 percent when the couple is over twenty-five at the time of marriage. The report also concludes that marriages that come after, or soon before, the birth of the couple’s first child are more likely to end. A smaller study published in 2012 echoed these findings.19


In any case, marriage no longer ensures economic survival, particularly a marriage to a man with less education in a low-income occupation. The CDC report found that marriages in low-income communities are more likely to end: 44 percent compared with just 33 percent in middle-income communities and 23 percent in high-income areas. This percentage is especially high for African American couples in low-income areas, where 56 percent of marriages end in divorce.20

This instability could also explain why more women are having children outside of marriage. In 2009, 41 percent of all births in the United States were to unmarried women, an increase from 28 percent in 1990 and 18 percent in 1980. In 2009 unmarried birthrates were highest for women in their twenties; as noted above, birthrates for teens declined between 1990 and 2009. Birthrates for unmarried women in their thirties also increased, more than doubling since 1980. Births to unmarried women in their forties also increased, from 12 percent in 1980 to 21 percent in 2009, although birthrates for this age group are lower than for any other.21

Not all unmarried parents are unpartnered, though. In 2010 the US Census Bureau started counting same-sex couples, who previously would have been categorized simply as unmarried; it estimates that just over 5 percent of American households are composed of cohabiting opposite-sex couples and 0.5 percent of cohabiting same-sex couples.22 Cohabiting couples with college degrees are less likely to have children than cohabiting couples without degrees—33 percent compared with 67 percent—a difference that has a dramatic effect on household income. Cohabiting college graduates’ median earnings in 2009 were just over $106,000, compared with just over $46,000 for other cohabiters.23

And in contrast to single-parent celebrities, the majority of unmarried mothers are low earners, with less education than their married counterparts.24 Not only are lower-income people less likely to get and stay married, but women’s wages still lag behind men’s.25 According to the US Census, in 2009 women’s annual earnings were just 65 percent of men’s.26 Families headed by single mothers are far more likely to live in poverty than any other family structure; in 2010, 34 percent of all such families lived below the poverty line, compared with 17 percent of father-headed households and just under 8 percent of married-couple households.27 These numbers are even higher for single women of color with children under five.

Despite claims that celebrities like Angelina Jolie and Brad Pitt, who famously had children without being married, influence others to have children outside of marriage, race and ethnicity are better predictors of unmarried births than celebrity influence. Seventy-two percent of African American women, 53 percent of Latinas, and 68 percent of American Indian/Alaska Native women were unmarried when they gave birth in 2009. By contrast, 36 percent of white women and just 17 percent of Asian/Pacific Islander women were unmarried.28

So while we might hear of professional women with healthy incomes who decide to go it alone, most unmarried mothers’ lives are far less glamorous. Yet some, like movie critic Michael Medved, charge that celebrities who have children outside of marriage make illegitimacy “chic.”29 It’s tempting to blame celebrities, whose lives are part of the never-ending soap opera of the celebrity gossip machine that many people follow. While the rich and famous might adopt a more bohemian lifestyle than the average person, the factors that lead to teen pregnancy and single parenthood are much more closely associated with race, ethnicity, and socioeconomic status.

What Changed American Families?

While popular culture might help us get used to the idea of single-parent families or having children outside of marriage, there are other major factors behind the shifts we have witnessed in the past half century. The causes of large-scale changes to families are complex, and individual families are even more complicated.

In this section I will explore four key issues that have contributed to the increase in single-parent families and the rise in divorce rates since the mid-twentieth century. First, economic changes have created the need for most adults to work in the paid labor force and have affected families in other ways that I will discuss. Second, legal changes have altered marriage, divorce, and single parenthood. Third, these shifts have led to new expectations for marriage itself, leading people to marry later and sometimes walk away from marriages. Finally, I will conclude by looking at how all of these changes have created new cultural meanings and norms for families and marriage. This is where popular culture comes back into the picture.

Money Matters

We have all heard that arguments over money are a major contributor to divorce. Couples with financial problems experience more stress and interpersonal friction. But macroeconomic changes are important to consider as well. One of the most significant shifts has been the rise and fall of the so-called family wage in the twentieth century. Starting in the 1920s and accelerating after the end of World War II in 1945, real wages rose, meaning that a single (male) earner could typically support a family and an increasingly higher standard of living. This made what we often think of as the traditional family possible and meant that for many during this time, marriage afforded substantial economic benefits to women. By contrast, being unmarried could create serious financial hardships for women, particularly if they had children. Few occupations were open to women, and those that were rarely paid enough to support a family.

Bear in mind, the low divorce rates of midcentury did not necessarily mean marriages were always happy. For some women who had worked in the labor force during World War II, the return to domesticity was not necessarily their choice, and divorce rates spiked during the war years of the 1940s. The growth of the suburbs meant that fewer extended families lived together and shared child rearing and household tasks, placing more pressure on mothers who sometimes felt isolated in new tract homes, as Betty Friedan detailed in her 1963 book The Feminine Mystique.

Yet the main reason many women entered the paid labor force in the last decades of the twentieth century was not personal fulfillment, but necessity. Declines in men’s real wages, or the purchasing power of men’s earnings, meant that many more families required a second earner. Low-income women had always worked in the labor force, but gradually the need for dual incomes trickled up class lines. This middle-class squeeze, as we have come to call it, put pressure on families in two important ways.

First, it introduced new time pressures on families and challenged the traditional gender order by shifting family responsibilities. Second, as women gained economically, they had choices that their mothers likely did not. With the passage of the Equal Credit Opportunity Act, signed into law in 1974, unmarried women were able to get credit cards in their own name. This, coupled with the gradual decline in the wage gap between men and women, meant that women were no longer as economically beholden to marriage as they had been in the past. Many women no longer had to choose between staying in an unhappy marriage and becoming homeless.

Critics blamed feminism for undermining marriage, but in truth the need for women to enter the labor force in large numbers had more to do with other economic factors of the 1970s: skyrocketing inflation, led in part by high energy costs; the beginning of deindustrialization, which bled manufacturing jobs from many cities; and the decline in unions and the family wage and benefits that accompany unionized jobs. Yes, for some women getting a job meant personal fulfillment, but for the majority it was not simply a matter of choice.

Legal Changes

The divorce rate dipped after World War II and remained relatively flat until the mid-1960s, when there was a sharp jump. Not only did economic circumstances create changes within families, but laws changed as well.

One such change came with the 1965 Supreme Court decision Griswold v. Connecticut, which overturned a state law banning the use of contraception and effectively ended the Comstock laws, originally passed in 1873, which had classified information about birth control as obscene and made its distribution illegal. This decision made birth control more widely available and helped weaken the chain linking sex and marriage, a change that some decry and others celebrate.

Just three years later, in 1968, another significant decision, Levy v. Louisiana, gave so-called illegitimate children the same rights as children born to married parents. Prior to this decision, children born outside of marriage had few inheritance rights, and in this case the children born to Louise Levy were initially denied the right to sue a doctor for malpractice after their mother died, since they were not considered legitimate. Following this decision, the term illegitimacy no longer had legal bearing on family law, and eventually the word itself fell into disfavor.

Perhaps the biggest legal change affecting divorce rates came in 1970, with the introduction of the first no-fault divorce law in California. Spouses no longer had to prove to a judge that they deserved a divorce, and it became easier for one party to end a marriage—with or without the spouse’s agreement. Keep in mind that this didn’t necessarily cause people to get divorced; rates had been on the rise already. Instead, it served to streamline the process and reduce the time spent in the already burdened family courts.

Still, critics insist that this made divorce too easy, although most people who go through a divorce will likely disagree that it is ever easy, even if the legal process has been simplified. Calls for “covenant marriage” in many states seek to allow couples to choose a form of marriage that would essentially opt them out of no-fault divorce. Ironically, these laws are often promoted by evangelical Christians, who actually have higher divorce rates than nonevangelicals.30 To limit divorce, public policy would do better to address the factors that contribute to a marriage’s dissolution, like economic instability, rather than the legal process of ending it.

The Meaning of Marriage

As marriage no longer guarantees financial stability and courts have made it easier to dissolve unhappy marriages, the meaning of marriage itself has changed. In the course of a century, women gained property and inheritance rights and the ability to maintain custody of children. Few divorces occurred prior to the twentieth century, largely because women nearly always lost any parental rights and had no claims on any property or wealth from the marriage. As legal changes—particularly voting rights—made women full citizens, eventually with all the same rights as men, they were no longer as dependent on men for economic survival.

Marriage evolved to become less of an economic arrangement and more about personal fulfillment, although certainly a financial factor remains. As historian Stephanie Coontz writes in Marriage, a History, nineteenth-century observers were very concerned that marriages based on love alone would become unstable. They were correct, although from our twenty-first-century vantage point, marriage based on partnership and emotional fulfillment seems superior to marriage based on maintaining family reputation, consolidating power, or transferring property.

But as Coontz notes, “love conquered marriage.” As marriage became linked with romance at the end of the nineteenth century and into the twentieth, it became more unstable and overloaded with expectation. Marital roles were once clear-cut and unambiguous, grounded in servitude; changes in the economic structure have made them more equal. I and many others would agree that this is a change for the better. By today’s standards, a marriage based on having no place else to go is not much of a marriage at all.

Others see the effect and overlook the cause of change. Movements to reinstate patriarchal marriage that encourage women to view their husbands as the leaders of the family might work in some families, but we no longer have an economic structure that supports female dependence on the scale it did in previous generations. Not only is it impossible to roll back the clock to the preindustrial age, when marriage was a vital economic arrangement, but as women entered the labor force in larger numbers, the United States experienced tremendous economic growth. If families could somehow afford to live on one income (and not rely on children’s labor, as many families did before the 1930s), the country’s productivity would decline and the economy would shrink.

Women’s participation in the paid labor force is necessary on both personal and societal levels, and many women find that their contributions in the workforce are a source of fulfillment. At the same time, money has not completely separated itself from marriage. A 2006 Gallup Poll found that although married people tended to be happier than unmarried respondents, 67 percent of married people with higher incomes reported being happy, compared with 56 percent of those in the lowest income group. A 2010 Pew Research Center study found that those with the highest incomes were also the most satisfied with their family life; those with higher levels of education were also the most likely to be married, maximizing family-income potential.31 Perhaps our expectations for marriage do not live up to the reality: according to the General Social Survey, slightly fewer married people report being very happily married today than in the 1970s, when divorce rates were skyrocketing. In 1972, 67 percent reported being very happy in their marriage, and that percentage remained in the upper sixties for most of the 1970s. In 2006, the most recent year for which data are available, just under 61 percent reported that their marriage was very happy, a figure representative of responses throughout the 2000s.

By contrast, the percentage reporting that they were “not too happy” in their marriage has remained relatively stable and low: just under 3 percent in 1972 and about 2 percent in 2006. This could also reflect the changing expectations people have about marriage being a source of total emotional fulfillment. But a 2007 British poll found that nearly a quarter of respondents regretted marrying their spouse, and 15 percent had reservations about getting married at the time of their wedding.32 The survey also notes that some respondents admitted to getting married for the party and social aspects of the wedding itself.

Just as the meaning of marriage has changed, weddings have become a multibillion-dollar industry. Ironically, the emphasis on elaborate weddings began as marriage itself became less stable. From relatively simple affairs for most mid-twentieth-century couples to big-budget extravaganzas for many now, the wedding itself has taken on more importance. As Chrys Ingraham writes in her book White Weddings: Romancing Heterosexuality in Popular Culture, the “wedding-industrial complex” encourages couples to focus on throwing the most elaborate party, spending an average of nearly twenty-eight thousand dollars per affair.33

From the gown to the invitations, favors, food, musicians, flowers, attendants, and the honeymoon, wedding planners focus a tremendous amount of attention on consuming the fantasy of marriage at the very time when the real meaning of marriage is unclear and in flux. This practice has become part of the romantic idea of marriage and is embedded in popular culture, from movies and television shows to books, magazines, and an endless number of blogs. Coverage of over-the-top celebrity weddings is a staple in magazines and is regularly televised on reality programs.

Even as women have other means to carve out a sense of identity, the wedding remains preeminent—brides-to-be often describe their wedding as a chance to feel like a princess for a day. Of course, in America, princesses exist only in fairy tales, but marriages exist in the real world. When fantasy meets reality, disappointment is likely. What impact does popular culture have on this misperception of marriage?

Happily Ever After?

While the happily-ever-after fantasy is clearly a mainstay in celebrity gossip, which fawns over weddings and babies, and in romantic comedies where love conquers all, an equally powerful part of our culture broods over marriage’s demise. Critics like Michael Medved argue that movies and television often demean the importance of marriage, making divorce appear liberating and families constraining.34 He and others look to the real lives of movie stars, particularly those who have children outside of marriage, and fear that they are setting a bad example for the rest of us. Are they negatively influencing Americans’ values?

Cohabitation has become more common in American society—US Census Bureau estimates suggest that just over 1.5 million heterosexual couples cohabited in 1980, rising to about 7.52 million couples by 2010.35 Rather than simply a cultural shift, this trend in part reflects the economic changes discussed above: marriage offers less financial stability now than in the past. Those most likely to cohabit are among those least likely to derive financial benefits through marriage.

In a 2005 poll of young adults aged eighteen to twenty-four, 59 percent agreed that it was acceptable for a couple to live together without being married, yet 57 percent agreed that people who want children should be married. Although this might suggest that young people aren’t as committed to marriage, 67 percent of those polled agreed that marriages should end only in “extreme circumstances” and that marriage should be a lifetime commitment. A 2006 Washington Post/Kaiser Family Foundation/Harvard University poll of American adults of all ages found that 76 percent of adults believed that marriage is very or somewhat important. In a 2012 Rasmussen Poll, 78 percent rated marriage as important for American society.36

And most people do eventually marry—even if the union does not last. Approximately 96 percent of Americans have married at least once by the age of sixty-five. According to the US Census Bureau, in 2011 half of Americans over fifteen were married, and 10 percent were divorced.37

Although much has been made recently of the declining proportion of married women, census data indicate that in 2011 the percentage of never-married women over fifteen was lower than or similar to that of the first four decades of the twentieth century. Between 1900 and 1940, the percentage of women who had never married ranged from 31 to 26 percent, compared with 28 percent in 2011.

Whereas the percentage of widowed women has remained virtually the same since 1900, the percentage of divorced women increased from 0.5 to 11 percent. In other words, more women had gotten married—and subsequently divorced—than in their great-grandmothers’ generation at the turn of the last century.38 Marriage still matters, and most people intend to get married. The importance of marriage is underscored by the many gays and lesbians striving to be included in this still-hallowed institution.

Just because people value marriage doesn’t mean it’s easy to maintain. In a Pew Research Center Poll, 66 percent of respondents felt that it was hard to have a good marriage (by contrast, 21 percent said it was easy, and 9 percent said “probably impossible”).39 It is important to keep in mind that the economic pressures that make marriage more challenging, rather than a lack of values, are likely behind the breakup of many marriages. The stressors associated with economic difficulties place tremendous strain on families, often overriding the best intentions of wedding vows.

Popular culture is contradictory on this matter: it both celebrates marriage and increasingly supports alternatives. Although it may seem as though it is leading these trends, pop culture is instead serving as an echo chamber, both reflecting back and amplifying the changing nature of personal relationships and families. Yes, celebrity gossip and entertainment no longer demonize single mothers to the extent that they might have in the past. The morality codes enforced within the film industry in Hollywood’s early days have gone the way of the double feature, and television shows certainly are not as genteel as they were in the days of Ricky and Lucy Ricardo’s separate beds and being “in a family way.”

But it is a mistake to overlook the overarching changes that have altered families—probably permanently. Popular culture may make these changes most visible and sometimes draw criticism when we take notice. Understanding the rise in single parenthood, divorce, and changes in marriage requires more than understanding changes in the media, which often leave out the less-than-entertaining details about social structure, most notably racial and economic inequality, which are good predictors of teen pregnancy and single parenthood.

Notes

1. See, for example, Melissa Henson, “MTV’s Teen Mom Glamorizes Getting Pregnant,” CNN, May 4, 2011, 04/opinion/

2. Bill Albert, “With One Voice 2012: Highlights from a Survey of Teens and Adults About Teen Pregnancy and Related Issues,” National Campaign to End Teen Pregnancy, 2012,

3. Kathleen Kingsbury, “Pregnancy Boom at Gloucester High,” Time, June 18, 2008,,8599,1815845,00.html.

4. Ann-Marie Dorning, “Teen Baby Boom in One Mass. High School,” Good Morning America, June 20, 2008,; “Teen ‘Pregnancy Pact’ Has 17 Girls Expecting,” News Services, June 20, 2008,

5. ABC Family home page for The Secret Life of the American Teenager, http://abc- Teenager/page_Season-1-Episode-1.

6. Guttmacher Institute, “Teenagers’ Sexual and Reproductive Health” (Washington, DC: Guttmacher Institute, 2002). Accessible online at

7. “Bush Barks Up Wrong Tree When He Slams Simpsons,” TV Guide, May 23, 1992, 31.

8. “A Rascal Cartoon Character Sets Off Controversy in S.C.,” Los Angeles Times, March 1, 1994, A5; Frank McConnell, “‘Real’ Cartoon Characters: The Simpsons,” Commonweal, June 15, 1990, 389.

9. US Bureau of the Census, Table 58, Marital Status of the Population, by Sex, Race, and Hispanic Origin: 1970 to 1994 (Washington, DC: Government Printing Office, various years).

10. US Department of Health and Human Services, Underlying Population Trends, “Divorces and Divorce Rates, 1940–1997” (Washington, DC: Government Printing Office, 2000),; Rose M. Kreider and Renee Ellis, “Number, Timing, and Duration of Marriage and Divorces, 2009,” Current Population Reports (Washington, DC: US Census Bureau, 2011),

11. US Department of Health and Human Services, Underlying Population Trends, “Nonmarital Births, 1970–1998” (Washington, DC: Government Printing Office, 2000),; editorial, “Dan Quayle vs. Murphy Brown,” Time, June 1, 1992,,9171,975627,00.html.

12. Brady E. Hamilton and Stephanie J. Ventura, “Birth Rates for U.S. Teenagers Reach Historic Lows for All Age and Ethnic Groups,” National Center for Health Statistics Data Brief, no. 89 (Hyattsville, MD: National Center for Health Statistics, 2012),

13. Joyce A. Martin et al., “Births: Final Data for 2009,” National Vital Statistics Reports 60, no. 1 (Hyattsville, MD: National Center for Health Statistics, 2011), 4,

14. Stephanie J. Ventura, T. J. Mathews, and Brady E. Hamilton, “Births to Teenagers in the United States, 1940–2000,” National Vital Statistics Reports 49, no. 10 (Hyattsville, MD: National Center for Health Statistics, 2001), fig. 2,

15. Hamilton and Ventura, “Birth Rates for U.S. Teenagers Reach Historic Lows for All Age and Ethnic Groups.”

16. Ventura, Mathews, and Hamilton, “Births to Teenagers.”

17. Hamilton and Ventura, “Birth Rates for U.S. Teenagers Reach Historic Lows for All Age and Ethnic Groups”; US Census Bureau, “Median Household Income (in 2010 Inflation-Adjusted Dollars) by State Ranked from Highest to Lowest Using 3-Year Averages,” in Current Population Survey, 2009, 2010, and 2011 Annual Social and Economic Estimates (Washington, DC: US Census Bureau, 2012),

18. For more discussion, see Karen Sternheimer, “The Gloucester Pregnancy ‘Pact’: When Gossip Goes Global,” Everyday Sociology Blog, June 30, 2008,

19. M. D. Bramlett and W. D. Mosher, “Cohabitation, Marriage, Divorce, and Remarriage in the United States,” National Center for Health Statistics, Vital Health Statistics 23, no. 22 (2002): fig. 19, 18, 23; Casey E. Copen et al., “First Marriages in the United States: Data from the 2006–2010 National Survey of Family Growth,” National Health Statistics Report, no. 49 (2012),

20. Bramlett and Mosher, “Cohabitation, Marriage, Divorce, and Remarriage in the United States,” fig. 27, 20.

21. Martin et al., “Births: Final Data for 2009,” 8, table 15.

22. US Census Bureau, “Households and Families: 2010 American Community Survey, 1-Year Estimates” (Washington, DC: Government Printing Office, 2012), pid=ACS_10_1YR_S1101&prodType=table.

23. Richard Fry and D’Vera Cohn, “Living Together: The Economics of Cohabitation,” Pew Research Center, June 27, 2011, doubled-since-mid-90s-only-more-educated-benefit-economically.

24. Forum on Child and Family Statistics, “Percentage of All Births, 1980–2005.”

25. Pew Research Center, The Decline of Marriage and the Rise of New Families (Washington, DC: Pew Research Center, 2010),

26. Based on median salary of $32,184 for men and $20,957 for women. US Census Bureau, “Table 701: Median Income of People in Current and Constant (2009) Dollars by Race and Hispanic Origin,” in Statistical Abstracts of the United States: 2012 (Washington, DC: US Department of Commerce, 2012),

27. US Census Bureau, “People in Families by Family Structure, Age, and Sex, Iterated by Income-to-Poverty Ratio and Race: 2010,” Current Population Survey 2011, Annual Social and Economic Supplement,

28. Martin et al., “Births: Final Data for 2009,” tables 13 and 14.

29. Michael Medved, Hollywood vs. America: Popular Culture and the War on Traditional Values.

30. Christine Wicker, “Dumbfounded by Divorce,” Dallas Morning News, June 17, 2000,; Bramlett and Mosher, “Cohabitation,” table 15, 49.

31. United Press International, “Poll: Marriage + Money = Happiness,” January 12, 2007,; Pew Research Center, The Decline of Marriage and the Rise of New Families (Washington, DC: Pew Research Center, 2010),

32. General Social Survey, Happiness of Marriage, 1972–2006 (Chicago: National Opinion Research Center, 2007); Reuters, “Almost a Quarter of Britons Regret Marriage,” April 12, 2007,

33. Chrys Ingraham, White Weddings: Romancing Heterosexuality in Popular Culture, 9.

34. Medved, Hollywood vs. America.

35. US Census Bureau, Current Population Reports, “Table 52: Unmarried Couples by Selected Characteristic: 1980 to 1999,” in Statistical Abstract of the United States: 2001 (120th Edition) (Washington, DC: Government Printing Office, 2007),; US Census Bureau, Current Population Reports, “Table 62: Unmarried-Partner Households by Sex of Partners and Type of Household: 2005,” in Statistical Abstract of the United States: 2008 (127th Edition) (Washington, DC: Government Printing Office, 2007),; Rose M. Kreider, “Increase in Opposite-Sex Cohabiting Couples from 2009 to 2010 in the Annual Social and Economic Supplement (ASEC) to the Current Population Survey (CPS),” Working Paper (Washington, DC: US Bureau of the Census, 2010),

36. Greenberg Quinlin Rosner Research telephone poll conducted with 892 adults aged eighteen to twenty-four throughout the United States, August 10–17, 2005; telephone poll conducted by the Washington Post, Kaiser Family Foundation, and Harvard University of 2,864 American adults, March 20–April 29, 2006; telephone poll conducted by Rasmussen Reports of 1,000 American adults, January 20–21, 2012,

37. US Census Bureau, Current Population Survey, 2011 Annual Social and Economic Supplement, “Table A1: Marital Status of People 15 Years and Over, by Age, Sex, Personal Earnings, Race, and Hispanic Origin, 2011” (Washington, DC: Government Printing Office, 2012),

38. Ibid.; US Census Bureau, US Census of Population, 1900–1950, 1960, 1970, 1980, 1990, “General Population Characteristics” (Washington, DC: Government Printing Office, various years),

39. Telephone poll conducted by Pew Research Center for the People and the Press, 1,501 American adults surveyed September 5–October 2, 2006.



Media Health Hazards? Beauty Image, Obesity, and Eating Disorders

Can popular culture make people both obese and anorexic? This seems like a contradiction, but critics charge that both are effects of media culture. In particular, television, food advertisements, and video games are often blamed for contributing to child obesity (but rarely adult obesity). A Boston Globe article cites two prime causes, inactivity and overeating, and notes that “TV watching is linked to both of them.” “The simplest way to reduce obesity risk is to cut TV time,” an expert quoted in the article explains. Does television really make kids fat? The American Academy of Pediatrics thinks so and in 1999 suggested that doctors ask about children’s media use during checkups. One observer blames ads for junk food and watching television for creating “an obesity machine.”1

Yet at the same time, many people blame images on television for encouraging young girls to diet. “Look at Beyoncé and Hilary Duff and all the stars you see on TV and in magazines. They’re thin and they have flat stomachs and perfect everything,” a seventh-grade student told a Toronto newspaper, also noting that she had two classmates with anorexia.2 Fashion models and the magazines that feature them are also charged with contributing to body dissatisfaction. Additionally, online communities of people with anorexia and bulimia sometimes encourage and support each other in their quest to get even thinner. Although images in popular culture reinforce often impossible standards of beauty, the roots of these messages run deeper than popular culture. Likewise, spending more time in front of a screen and less time in vigorous activity can lead to weight gain. As we will see in this chapter, obesity has strong connections with race, ethnicity, and poverty; screen time is a part of the equation, but not the central underlying factor.

In this chapter we will critically examine both the complaints and the research to better understand the relationship between eating, health, and popular culture. More centrally, we will consider why popular culture once again finds itself at the center of attention and what structural causes we overlook in the meantime. Poverty, the continued objectification of women, and lack of access to quality health care seem less important when the more exciting explanations of television, advertising, video games, and fashion command our attention and interest.


By now you have likely heard about the trend in weight gain for children and adolescents. Between 1980 and 2000, the number of children classified as overweight doubled for those aged two through eleven and tripled for adolescents twelve to nineteen.3 To some the reason is clear and the solution simple: turn off the TV.

Whether the blame falls on ads for sugary, fattening foods or just on the act of watching itself, many public health advocates, such as the Kaiser Family Foundation, believe that popular culture is the key to the problem. Here’s the crux of their argument: the long-term increase in weight gain comes from the intensified marketing of low-nutrient, high-calorie foods to children, which encourages snacking while watching television.

“Our children are spending more time than ever in front of the television,” writes Steven Gortmaker of the Harvard School of Public Health. He adds that because children’s programming now runs around the clock, “devoted to entertaining them all day,” “kids are being taught to lead unhealthy lives from a very young age.”4 This sounds very reasonable at first glance. After all, sitting and eating in front of the television for long periods of time is a good way to gain weight. Spending a lot of time watching television means you are likely not doing something physically active like exercising. Problem solved?

Not so fast. Let’s consider some other factors in play here. First, those who blame television advertising presume that there are more ads today for kids than in the past. But a Federal Trade Commission (FTC) study from 2005 found that kids see fewer ads today than they did in the 1970s, when children weighed considerably less.5 I was a kid then and remember ads for cereals like Super Sugar Crisp and Sugar Smacks featuring fun cartoon characters that appeared regularly during Saturday-morning cartoons.

Once upon a time, advertising that cereal had sugar was a plus not just for kids but for parents, too. After World War II, cereal makers developed the technology to make sure added sugar stayed on cereal so parents wouldn’t have to take the time to add sugar themselves. It stayed on the flakes better, and most of it didn’t sink to the bottom of the bowl. At a time when parents were less concerned about obesity, a sweetened cereal meant their child would likely finish their breakfast.

Advertising sugary foods to children also doesn’t help us understand why adults are more likely to be overweight and obese than children or teens and why their rates rose even faster. Four times as many men over sixty were considered overweight in 2000 as in 1960, and three times as many women twenty to thirty-four became overweight in this same time period. In a 2009–2010 study, nearly 36 percent of American adults were classified as obese (defined as having a body mass index [BMI] of thirty or higher). By contrast, 17 percent of two- to nineteen-year-olds were deemed obese (weight above the ninety-fifth percentile for their age), with older teens more likely to be obese than young children.6


Yet critics apply the television explanation only to children and adolescents, implying that young children are the most vulnerable to advertising. These explanations ignore the more serious problem of adults who are overweight and obese, the population most vulnerable to the serious and immediate health risks of cardiovascular disease, type 2 diabetes, and many forms of cancer.

Certainly, public health officials are paying serious attention to overweight adults, but the television explanation is curiously applied only to children. While adults who sit and eat in front of the TV for long stretches aren’t doing themselves any favors, the existing research linking childhood obesity and television is actually much weaker than we are often told.

As with other reports that locate popular culture as a source of significant problems, the New York Times and other major newspapers tell us that advertising is a major culprit. A 2005 Times article describes “compelling evidence linking food advertising on television and the increase in obesity” based on a study by a federally appointed advisory group. The author of the study describes the research as “the nail in the coffin” in spite of the fact that we cannot definitively establish a cause-and-effect connection between advertising and children’s weight. Tom Harkin, the senator who requested the study, told the Times that advertising must be effective in getting kids to buy advertised products; otherwise, advertisers wouldn’t spend the billions that they do.7

This claim that advertising must work because industries do it is based on circular logic: in effect, it is saying that something must be so because it is. Industries do spend a great deal of money on advertising, but this doesn’t mean it necessarily has the outcome advertisers intend. A 2007 New York Times article, “Study Says Junk Food Still Dominates Youth TV,” also focuses on food ads as a central contributor to child obesity, after a Kaiser Family Foundation study found that 50 percent of the ads during children’s programming are for food, mostly snacks and fast food. “TV Helped Create the Child Obesity Problem,” a Washington Post headline asserts.8 Stories like these make it seem as though television is a major cause of child obesity.

A closer examination of the research reveals that the connection is not so simple. Researchers have been studying the possible link between television and obesity since at least 1985, when a large study found a correlation between television viewing and obesity.9 A correlation indicates an association, but not causation. Some studies found similar connections, yet others did not.

A study published in 2004 found that television was not related to weight, but that video game playing had a complex relationship with children’s weight: those whose weights were higher played moderate levels of video games, whereas thinner kids played both low and high amounts. The authors found an equally complicated relationship with computer usage—those who were heavy used the computer very little or a lot, with lower weight associated with moderate levels of use.10 Both short-term and long-term studies have been mixed in their findings.

This doesn’t mean that watching television for long periods of time with a lot of snacking and little exercise is a good idea, just that the causes of obesity are more complex than media use. Nonetheless, experimental interventions, in which one group is encouraged to watch less television or play fewer video games, have found declines in weight in the group assigned to cut back on screen time.11

But these studies ignore one major question: what factors lead to more television watching and other sedentary activities in the first place? Rather than just a bad choice, watching more television and staying indoors have causes themselves. For one, children in low-income urban areas often have few safe places to play outdoors. Parents’ work schedules often leave these kids with many hours of little supervision, and while watching television or playing video games inside may not be good for their waistlines, these activities keep them safe from potential harm on dangerous city streets. Steven Gortmaker, of Harvard’s School of Public Health, barely acknowledges the significance of these issues in his Boston Globe op-ed. “Parents of various socio-economic backgrounds and ethnicities are reluctant to acknowledge the problem because they often feel that their parenting skills are being called into question,” Gortmaker notes, suggesting that an attitude adjustment is all that these parents really need.12

This is about more than stubborn, prideful parents; it reflects large-scale structural patterns. The Centers for Disease Control and Prevention’s data on Americans and weight have found that African American and Latino children and adolescents are more likely to be overweight than their white counterparts. In data collected from 2009 to 2010, six- to eleven-year-old African Americans were more than twice as likely to be in the ninety-seventh weight percentile as white children in this age group, while Latino children were more than 50 percent more likely to be in this percentile than white kids were. For twelve- to nineteen-year-olds, the difference is most pronounced among girls: African Americans are nearly twice as likely to be in the ninety-seventh percentile as white girls, whereas the differences for boys are less dramatic, paralleling adult patterns, where significant ethnic disparities are found only in women.13

African Americans and Latinos are also significantly more likely to be poor than whites. According to 2010 US Census data, 27 percent of African Americans and Latinos live in poverty, compared with 10 percent of whites and 12 percent of Asian Americans.14 Poor people of color are more likely to live in areas of concentrated poverty in urban areas with fewer playgrounds and safe spaces.15 These neighborhoods also have fewer grocery chains and little affordable high-quality fresh produce, but instead have an abundance of low-cost fast-food restaurants. When public health officials ignore the very real challenges parents in lower-income communities face, they fail to fully address the causes of obesity.

Beyond socioeconomic status, obesity itself may be a causal factor in watching more television. Low self-esteem and social rejection, which many overweight children experience, may keep them inside and perpetuate the weight-gain cycle. And where are adults in this equation? Parents who lead sedentary lives are likely to be a major determinant here. As anyone who has tried to lose weight knows, changes that may seem simple to others might not be as easy for people for whom weight is a deep-seated issue.

If it were as simple as watching less television, eating healthier, and exercising more, there would be no need for the weight-loss industry. From the unregulated products hawked on infomercials to mainstream pharmaceuticals and bariatric surgery, obesity is, excuse the pun, a growth industry. While turning off the TV seems like an easy solution, it fails to take into account the complex realities of today’s health care needs and the economic realities of families who depend on cheap, high-fat food and live in neighborhoods with few safe spaces for children to play. Ironically, whereas poverty is now a predictor of obesity, starvation in the United States and other industrialized nations is often the result of an eating disorder.

Anorexia and Bulimia

Just as critics blame television and other forms of popular culture for weight gain, they also blame celebrities, magazines, websites, the fashion industry, and even Facebook for contributing to eating disorders. How can watching images of mostly underweight people on television make viewers want to eat both more and less at the same time? We might resolve this contradiction by suggesting that some people respond differently to the same images, or that popular culture makes people both heavier and more dissatisfied with their bodies, but no research supports this idea.

Nonetheless, stories about “thinspiration” websites, where pictures of participants and celebrities who appear to be starving themselves sit alongside advice on sustaining an eating disorder, are unnerving. It is very compelling to think that seeing super-skinny models and other celebrities in magazines, movies, and fashion-show runways causes people—especially young girls—to develop eating disorders.

If this is the case, only a small minority of people are affected in this way. It’s hard to know for certain, but estimates of the number of Americans with anorexia or bulimia or both range from 7 to 9 million.16 Focusing only on females, the National Institute of Mental Health notes that 0.5 percent to 3.7 percent of females will develop anorexia, and 1.1 percent to 4.2 percent will suffer from bulimia at some point in their lives. Though rare, these are serious disorders that can lead to major health complications and death. Although the conventional wisdom has been that this is a female problem, a 2007 Harvard University study found that a quarter of their sample with eating disorders were male.17

Even if very few people develop eating disorders, the fashion industry seems to employ many of them. Researchers who interviewed young women found that many used modeling and other activities like gymnastics as a cover for anorexia, which suggests that rather than creating eating disorders, the fashion industry may draw some who are already anorexic and validate their behavior. In 2006 two young models in Brazil and Uruguay died, apparently due to the effects of starvation. This led to calls for change within the fashion industry. That same year, Spain declared that all runway models must have a body mass index of at least 18; for instance, a five-foot-ten model needs to weigh at least 126 pounds to meet this threshold.18
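Spain’s rule reduces to simple arithmetic: BMI is weight in kilograms divided by the square of height in meters, so any minimum BMI implies a minimum weight for a given height. A minimal sketch of that calculation (the function names are mine, not from any official source) confirms the five-foot-ten figure cited above:

```python
import math

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def min_weight_lb(height_in, bmi_floor):
    """Lightest whole-pound weight that still meets a given BMI floor."""
    height_m = height_in * 0.0254                 # inches to meters
    weight_kg = bmi_floor * height_m ** 2         # invert the BMI formula
    return math.ceil(weight_kg / 0.45359237)      # kilograms to pounds, rounded up

# A five-foot-ten (70-inch) model under Spain's floor of BMI 18:
print(min_weight_lb(70, 18))  # prints 126, matching the figure in the text
```

The same helper shows why Milan’s proposed floor of 18.5 is slightly stricter: for the same 70-inch model it implies a few additional pounds.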

Rather than changing models’ body size, critics argue, this is likely simply to decrease the amount of runway work in Spain. Italy’s Milan-based Chamber of Fashion proposed that models hold a health license, obtainable after a panel of health experts evaluates their mental health and verifies that they have a BMI of at least 18.5. The Australian Medical Association called for a similar restriction for models in Australia. Going a step further, Buenos Aires province in Argentina passed a “law of sizes,” which requires clothing shops to carry larger sizes or face fines or even forced closure.19

In 2008 France’s National Assembly passed a bill making it illegal to publicly “incite extreme thinness.” This means that creators of websites like the pro-anorexia and pro-bulimia sites as well as magazines could be fined up to the equivalent of forty-seven thousand dollars and even jailed if they appear to be providing advice and encouragement for people to become dangerously underweight. Specifically, any attempt to “provoke a person to seek excessive weight loss by encouraging nutritional deprivation that would have the effect of exposing them to risk of death or endangering health” would become illegal, although perhaps difficult to prove.20 As of this writing, the bill is pending in the French senate.

Similar restrictions would violate the First Amendment in the United States, but the fashion industry here has faced pressure to make changes nonetheless. In 2007 the Council of Fashion Designers of America created a list of recommendations, including scheduling fittings for younger models earlier in the day to ensure proper sleep patterns, asking designers to “identify models with eating disorders,” and providing “more nutritious backstage catering.”21

But to paraphrase the cliché, you can bring a model to food, but you cannot make her eat, and the American industry has made it clear that it will not impose a minimum BMI for models. However, in 2012 Vogue publicly stated that it would not use models younger than sixteen or who appeared to have an eating disorder.22

Clearly, many of the young women—and teen girls—who walk the runways and whose images appear in fashion magazines are extremely thin and perhaps have eating disorders. Likewise, gossip magazines are quick to point out when celebrities lose (and gain) a great deal of weight, and producers are notorious for suggesting that stars lose weight. Working in the entertainment industry and living in the limelight can certainly promote unhealthy weight loss. But what about everybody else?

It may seem like a logical extension that people—especially young girls—who see these teens and young women glorified may themselves develop eating disorders. But the connection is not so simple. Psychologists who research eating disorders in all their complexity are typically reluctant to cite popular culture as a key causal factor.

Michael Levine, a professor of psychology at Kenyon College who studies eating disorders, told the New York Times in response to the proposed French law that “you’re going to be hard pressed to demonstrate in a very clear way that [Web] sites have a direct negative effect” in causing eating disorders. Michael Strober, director of UCLA’s Eating Disorders Program, told the Los Angeles Times that changes in the fashion industry would not necessarily reduce the incidence of eating disorders. “I don’t think you can assume that there will be a dramatic protective effect if the fashion industry alters its standard of body aesthetic,” he told the Times, but added that the attention to the problem of eating disorders was in itself positive, despite the lack of any proven causal link between popular culture and eating disorders. Ian Frampton, a psychologist at Exeter University, told the Times of London, “We need to move away from this idea that supermodels are to blame. It is probably not good for them to look as they do. But for anorexics, the desire not to eat and to be thin seems to be already in them and not something they can pick up by looking at a magazine. There were, after all, anorexics before super-thin models.”23

Rather than a “virus” spread through media images, anorexia and bulimia’s roots are much more tangled. While several studies have demonstrated a relationship between reading fashion magazines and symptoms of eating disorders (but not necessarily the actual development of a diagnosable disorder), it is very likely that those highly focused on their appearance would be drawn to such magazines.24 So we cannot conclude from these studies that magazine reading causes eating disorders.

Several studies have found little relationship between television and symptoms of eating disorders.25 Respondents who report wanting to look like celebrities are more likely to increase their physical activity level, which is not necessarily a negative thing, but they are also more likely to use diet supplements. Other research has pointed to family and peers as a more central influence on body image and symptoms of eating disorders. A study published in the Journal of Marriage and Family found that a critical family environment and having domineering parents are key factors in adolescents with eating disorders.26

Another study of girls aged eight to eleven found that peer body dissatisfaction is the strongest predictor of a girl’s own dissatisfaction with her body (the study found that the more children’s television programming the girl watched, the less dieting awareness she had). Watching more music videos and reading teen magazines were positively correlated with more dieting awareness, but these relationships are not as strong as peer influence.27

Media images of young women in particular do merit our attention, but considering only a cause-and-effect relationship with eating does not go far enough. Images of unrealistically thin young women reflect a very narrow version of beauty and the way in which women are routinely valued based on their appearance in popular culture and, in many cases, everyday life.

As I discussed in Chapter 6, these images reflect the contradictions of gender and power. While women have accomplished a great deal over the past several decades, the plethora of images of thin, young, and often white women regarded as beautiful serves as a reminder that their self-worth should not be severed from their appearance. From television shows based on weight loss and makeovers to the endless number of infomercials promising to offer self-improvement, there are numerous ways in which women are encouraged to spend their money to meet a narrow ideal.

There is nothing wrong with feeling good and looking good, yet it is often characterized as an imperative rather than an option. As cultural critic Susan J. Douglas writes, deriding a woman’s appearance is often used to minimize any critiques she might have about gender in society. The much-maligned “ugly feminist” is a prime example: her ideas are dismissed because she fails to meet— or even strive for—narrow beauty ideals. Her condemnation serves as a warning to others that it is dangerous to challenge accepted gender and beauty regimes.28

Men also face pressure to live up to often unrealistic physical ideals of being tall, muscular, and strong, rooted in notions of hegemonic masculinity, which emphasizes the need for men to physically dominate others. In addition to magazines, actors, and athletes promoting this image, coaches for even the youngest participants in sports can also provide added pressure, leading some to use steroids.

While glamorizing super-thin models and celebrities is problematic on a number of levels, body dissatisfaction and eating disorders have social and environmental roots beyond the media. Yes, popular culture has a meaningful impact on what we think of as beautiful, and this image is unobtainable for most. But because the fashion and celebrity industries provide us with so many examples of extremely thin people, we often overlook other important social factors.


As I discussed earlier in this chapter, race and poverty create important disparities in rates of obesity. But these differences are not simply genetic or cultural; they have sociological roots as well. Sociologist Becky Wangsgaard Thompson interviewed women of varied racial, ethnic, and socioeconomic backgrounds and argues that we need to look beyond the “culture of thinness” explanation to understand what she terms “eating problems.” Traditionally, the culture-of-thinness approach focused only on young, middle-class, white, heterosexual girls and women and ignored most others. Through her interviews, Thompson found that many of the women who were compulsive eaters or dieters had been sexually or physically abused, or both; eating, dieting, or overexercising became a coping mechanism. Likewise, some of the women noted that the stress of poverty or of dealing with racism or homophobia also led to their behavior. Whereas the culture-of-thinness explanation characterizes women as simply vain and appearance obsessed, Thompson reminds us that eating problems are often not really about looks at all.29

Sociologists Penelope A. McLorg and Diane E. Taub also spoke with women with anorexia and bulimia and observed them in university self-help group meetings. They describe how people with eating disorders often have parents who are very focused on diet and exercise, and how these women received high praise for their thin appearance from both family and friends.30 To understand the social factors that influence body dissatisfaction, we have to go beyond just looking at images in popular culture.

There are many contradictions to consider here as well. African American girls report that they are more satisfied with their bodies and focus less on trying to lose weight, a good thing if we are looking through the eating-disorder lens, but not if we focus on obesity; as noted earlier, African American girls and women are more likely to be obese than their male or white peers.31 Public health officials also ironically encourage body dissatisfaction, particularly in those who are overweight and possibly at risk for weight-related ailments. So while research suggests that those who read a lot of beauty magazines might feel less satisfied with their bodies, at least in the short term, this does not mean that most of these people will develop eating disorders, particularly since most research is done with a non–eating disordered population.32

Do media images contribute to a sense of body dissatisfaction? Quite possibly. But contributing is not the same as causing the problem in the first place, as those who are troubled about their bodies are likely to seek out information on weight loss and unfavorable comparisons that justify their negative sense of self.33

Still, when super-thin models represent images of beauty that are not only unrealistic but unhealthy, it is worth asking why extreme thinness became equated with beauty. Americans have been ambivalent about weight for a long time. As a nation of both plenty and poverty, we have roots in the Puritan ethic of self-restraint but also celebrate conspicuous consumption. It makes sense that we would be profoundly confused about weight and the question of how much is enough. As cultural historian Hillel Schwartz observes, “On the one hand we seem to want more of everything; on the other hand we are suspicious of surplus.”34

To understand the contemporary American focus on thinness, we have to go back much further than we might think. Further back than Twiggy, the boyishly thin supermodel of the late 1960s, and well before the current crop of stars accused of making anorexia a chic fashion statement. Many people look back to the 1950s, during Marilyn Monroe’s heyday, and mistakenly conclude that being full-figured was considered beautiful throughout history and that the body ideal only recently became thinner and thinner. We need to go much further back to really understand the context of bodies and beauty.

“Fasting girls,” who claimed to abstain from eating for weeks, months, or even years, gained notoriety throughout history, from the Middle Ages through the nineteenth century. Rather than being seen as vain or mentally ill, these girls were thought to embody spiritual purity and holiness. Some claimed to eat nothing but the holy Eucharist wafer and thus be filled by the body of Christ alone. Many at the time thought that their lack of consumption was a bona fide miracle. Most of these claims were later debunked as fraudulent, but their existence nonetheless reveals the historical connection between refusing food and sanctity.35

As religious doctrine began to lose influence and science became more predominant, the meaning of thinness also shifted. As the medical profession gained status, doctors gradually took over as experts in matters of the body and health. But more important, another American institution would hold perhaps the biggest influence on body size and normality: the insurance industry. Starting in the 1830s, life insurance rates privileged thinner clients. Based on mortality rates of mainly white, middle-class people seeking policies, the definition of healthy weight fell consistently below national averages. And the industry frequently redefined healthy weight downward in order to increase premiums on a large number of customers, even as average heights increased.36

So an emphasis on thinness is not new. But because emaciated celebrities often parade before us, it is easy to see them as the central impetus for young girls to hate their bodies. American girls have been monitoring their body size and comparing themselves unfavorably to others for at least a century.37 Ironically, the rise of feminism coincided with increased attention to weight. The women’s movement of the late nineteenth century challenged the restrictive corsets that dominated middle-class female fashion at the time. Made with bone or metal stays, these undergarments were not just uncomfortable but impeded women’s movement, breathing, and digestion at a time when doctors encouraged more exercise and focused on gastric health. Obesity transitioned from being perceived as sinful—a sign of gluttony—to an illness that physicians frequently warned the public to prevent. Women learned to exert internal control, taking over the function that the corset once served. In the later decades of the nineteenth century, ready-made clothes became available at department stores and through mail-order catalogs, further demanding conformity. Scales appeared in nearly any grocery or druggist’s shop as merchants sought to attract customers, encouraging everyone to monitor their weight.38

This brings us to a central point about bodies deemed too thin and too fat: body dissatisfaction is big business. Beyond cultural shifts, genetic inheritance, and media images lies capitalism. If we all suddenly decided our bodies are just fine, we would no longer channel billions of dollars into the appearance industry. Gym memberships, cosmetic surgery, diet centers, body-shaping clothing, and drug and supplement manufacturers are just a few of the industries that benefit from helping to nurture body dissatisfaction. As Maggie Wykes and Barrie Gunter describe in their book The Media and Body Image: If Looks Could Kill, beauty holds “cultural currency,” a commodity that needs to be virtually unattainable in order to maintain its value. So body dissatisfaction is not just an unfortunate side effect of beauty magazines, but in some ways a central facet of the way we do business.

Health Hazards

While we are busy worrying about what’s advertised on television or how thin models and celebrities are, America’s health care system is in a state of emergency. The political rancor surrounding new health care policy suggests that it will not be solved easily. The biggest threat to American children’s health is not the fashion or advertising industries, but limited access to health care. In 2000 the World Health Organization ranked the US health care system thirty-seventh in the world—well below other industrialized nations in the top ten and a notch below Costa Rica’s, largely because of the large disparity in available health care based on income.39

It’s not just low-income people who lack health insurance. A growing number of people considered middle income have no health insurance; in 2010 more than one in five families with earnings between $25,000 and $49,999 had no health insurance, and 15 percent of those who earned between $50,000 and $74,999 had no coverage. Between 2000 and 2010, the number of uninsured Americans grew by 13.3 million. Ten percent of all children did not have health insurance coverage, many of them with working parents who cannot afford the extra costs of health care coverage. As a 2008 60 Minutes story detailed, many struggling families have forgone regular medical care and waited in line for hours to be seen at a volunteer center staffed by doctors, dentists, and nurses who normally donate their services in developing nations.40 This problem is likely to continue, as fewer employers offer health insurance, down from 68 percent of employers in 2000 to 60 percent in 2011. For jobs that do offer insurance, family premiums have also skyrocketed, from an average of under $6,000 a year in 1999 to more than $15,000 in 2011, making health insurance increasingly unaffordable to many working families who earn too much for state or federal aid. The increase in health care costs also increases expenses for businesses, which must either raise prices for consumers or freeze wages for employees in order to afford to provide insurance.41 According to the authors of The Fragile Middle Class, more than half of bankruptcies in the United States are caused by the costs associated with a serious health problem.

The Children’s Defense Fund (CDF) published a report in 2006 that found that the interplay among environmental factors, economic disparities, and the lack of quality health care has led to a growing gap in life expectancy. According to a 2009 CDF report, about 20 percent of Latino and American Indian children had no health insurance, while about 12 percent of African American children and 11 percent of Asian American/Pacific Islander children had none. By contrast, 8 percent of white children had no health insurance. The CDF’s 2006 study found that 3 percent had no usual place of health care. Nearly 6 percent of African American and 12 percent of Latino children had no regular health care source.42

A study published in the New England Journal of Medicine in 2007 found that when children do see a doctor, the quality of care is often lacking (the authors note that care for adults is similar). Researchers observed that children received appropriate care from physicians less than half of the time. Children with acute medical problems got the proper care most often, nearly 68 percent of the time according to the study, compared with just over half of the time (53 percent) for chronic conditions and just under 41 percent of the time for preventive measures.43 The study found that just 31 percent of children aged three to six were weighed and measured, while only 15 percent of adolescents stepped on a scale.44 With this in mind, it seems unrealistic to encourage pediatricians to ask extended questions about children’s media habits when the basics are often left out of routine checkups for those who can afford them.

These racial and ethnic disparities carry over into other arenas of life, too: lower-income children of color are more likely to attend schools in cities without playgrounds and with large (if any) physical education classes.45 The 2006 CDF report recommends a community-based approach to dealing with obesity, recognizing the need to address infrastructural issues like transportation and group programs rather than individual-based suggestions like just turning off the television set. CDF also suggests creating culturally based activities, such as using hip-hop or Latin music and dance and incorporating Native American traditions of running into community activities. Rather than pronouncements from mostly white, upper-middle-class researchers for less television and more self-control, real change comes from partnering with communities and recognizing the realities of their circumstances in order to create opportunities for leading healthier lifestyles. Yes, it’s a good idea for those who are on the heavy side to watch less television and get more exercise, and we ought to critically question why models and celebrities are encouraged by their employers to maintain an unhealthy weight. We should think about why we often narrowly define beauty and the health risks models and celebrities take in order to achieve this impossible ideal. In asking these questions, however, we need to keep the big picture in mind: popular culture is not the biggest threat to our health, as fewer and fewer Americans have access to any health care at all. Focusing on popular culture encourages us to focus on individual choices as the main cause of health problems and overlook structural conditions that contribute to poor health.

Notes

1. Cathryn M. Delude, “Your Health: Time to Take a Vacation from Television as School Ends, Keep Kids Healthy by Limiting TV Time,” Boston Globe, June 10, 2003, C14; Marilyn Elias, “Pediatricians Defend Media Exam; They Cite TV’s Effects on Health,” USA Today, August 3, 1999, D10; Nat Ives, “As National Geographic Explores Obesity, Critics Question Food Ads in Its Magazine,” New York Times, July 21, 2004, C1.

2. André Picard, “Dieting Commonplace Among Preteen Girls,” Globe and Mail (Toronto), May 11, 2004, A21.

3. Cynthia L. Ogden et al., “Prevalence and Trends in Overweight Among US Children and Adolescents, 1999–2000,” Journal of the American Medical Association 288 (2002): 1728–1732, table 4.

4. Steven Gortmaker, “Twin Scourges for Kids: Obesity and Television,” Boston Globe, October 19, 2004, A23.

5. Caroline E. Mayer, “Fewer Food Ads in Kids’ TV Diet, New Study Finds,” Los Angeles Times, July 16, 2005, E15.

6. Katherine M. Flegal et al., “Prevalence and Trends in Obesity Among US Adults, 1999–2000,” Journal of the American Medical Association 288 (2002): 1723–1727, table 4; Katherine M. Flegal et al., “Prevalence of Obesity and Trends in the Distribution of Body Mass Index Among U.S. Adults, 2009–2010,” Journal of the American Medical Association 307 (2012): 491–497, doi:10.1001/jama.2012.39; Cynthia L. Ogden et al., “Prevalence of Obesity and Trends in the Distribution of Body Mass Index Among U.S. Children and Adolescents, 2009–2010,” Journal of the American Medical Association 307, no. 5 (2012): 483–490.

7. Marian Burros, “Federal Advisory Group Calls for Change in Food Marketing to Children,” New York Times, December 7, 2005, C4.

8. Elizabeth Olson, “Study Says Junk Food Still Dominates Youth TV,” New York Times, March 29, 2007, C10; Cecilia Capuzzi Simon, “Move It, Kid; TV Helped Create the Child Obesity Problem. Can It Help Solve It?,” Washington Post, February 25, 2003, F1.

9. William Dietz and Steven Gortmaker, “Do We Fatten Our Children at the TV Set? Obesity and Television Viewing in Children and Adolescents,” Pediatrics 75 (1985): 807–812.

10. Elizabeth A. Vandewater, Mi-suk Shim, and Allison G. Caplovitz, “Linking Obesity and Activity Level with Children’s Television and Video Game Use.” See also T. Robinson et al., “Does Television Viewing Increase Obesity and Reduce Physical Activity? Cross-Sectional and Longitudinal Analyses Among Adolescent Girls,” Pediatrics 81 (1993): 273–280; R. Durant and T. Baranowski, “The Relationship Among Television Watching, Physical Activity, and Body Composition of Young Children,” Pediatrics 94 (1994): 449–455.

11. Henry J. Kaiser Family Foundation, The Role of Media in Childhood Obesity (Washington, DC: Henry J. Kaiser Family Foundation, 2004).

12. Gortmaker, “Twin Scourges.”

13. Ogden et al., “Prevalence of Obesity and Trends”; Flegal et al., “Prevalence of Obesity and Trends.”

14. US Census Bureau, Income, Poverty, and Health Insurance Coverage in the United States: 2010 (Washington, DC: Government Printing Office, 2011).

15. For more discussion, see William Julius Wilson, More Than Just Race: Being Black and Poor in the Inner City.

16. For examples of estimates, see

17. National Institute of Mental Health, “The Numbers Count: Mental Disorders in America” (Washington, DC: National Institutes of Health, 2008); Sandra G. Boodman, “Eating Disorders: Not Just for Women,” Washington Post, March 13, 2007, HE1.

18. Penelope A. McLorg and Diane E. Taub, “Anorexia Nervosa and Bulimia: The Development of Deviant Identities,” Deviant Behavior 8 (1987): 177–189; Siri Agrell, “Are Thinness Laws Too Heavy-Handed?,” Globe and Mail (Toronto), April 19, 2008, A14.

19. Eric Wilson, “Health Guidelines Suggested for Models,” New York Times, January 6, 2007, C3; AAP General News Wire, “NSW: Fashion Week Moves to Ban Skinny Models,” AAP News (Australia), March 4, 2007, 1; Annie Kelly, “A Tiny Step for Womankind,” New Statesman, March 27, 2006.

20. Devorah Lauter, “France May Make It Illegal to Promote Extreme Thinness,” ABC News, April 15, 2008; Doreen Carvajal, “French Bill Takes Chic Out of Being Too Thin,” New York Times, April 16, 2008, A6.

21. Eric Wilson, “Health Guidelines Suggested for Models,” New York Times, January 6, 2007, C3.

22. Samantha Critchell, “Vogue Bans Too-Skinny Models from Its Pages,” Associated Press, May 4, 2012.

23. Carvajal, “French Bill”; Valli Herman, “Is Skinny Going Out of Style?,” Los Angeles Times, December 16, 2006, E1; Fran Yeoman, “Anorexia ‘Cannot Be Picked Up by Looking at Photographs of Super-Thin Models,’” Times (London), December 17, 2007.

24. For examples, see Kimberley K. Vaughan and Gregory T. Fouts, “Changes in Television and Magazine Exposure and Eating Disorder Symptomatology,” Sex Roles 49 (2003): 313–320; Renee A. Botta, “For Your Health? The Relationship Between Magazine Reading and Adolescents’ Body Image and Eating Disturbances,” Sex Roles 48 (2003): 389–400.

25. See Vaughan and Fouts, “Changes in Television”; and Alison E. Field et al., “Exposure to the Mass Media, Body Shape Concerns, and Use of Supplements to Improve Weight and Shape Among Male and Female Adolescents,” Pediatrics 116 (2005): 484.

26. Field et al., “Exposure to the Mass Media”; Susan Haworth-Hoeppner, “The Critical Shapes of Body Image: The Role of Culture and Family in the Production of Eating Disorders,” Journal of Marriage and Family (2000): 212.

27. Hayley K. Dohnt and Marika Tiggemann, “Body Image Concerns in Young Girls: The Role of Peers and Media Prior to Adolescence,” Journal of Youth and Adolescence 35 (2006): 141–152.

28. Susan J. Douglas, The Rise of Enlightened Sexism: How Pop Culture Took Us from Girl Power to Girls Gone Wild.

29. Becky Wangsgaard Thompson, “‘A Way Outa No Way’: Eating Problems Among African American, Latina, and White Women,” Gender and Society 6 (1992): 546–561.

30. McLorg and Taub, “Anorexia Nervosa.”

31. See Renée A. Botta, “The Mirror of Television: A Comparison of Black and White Adolescents’ Body Image,” Journal of Communication 49 (2000): 144–159.

32. Maggie Wykes and Barrie Gunter, Media and Body Image: If Looks Could Kill, 189, 191.

33. Ibid., 175.

34. Hillel Schwartz, Never Satisfied: A Cultural History of Diets, Fantasies, and Fat, 5.

35. See Joan Jacobs Brumberg, Fasting Girls: The History of Anorexia Nervosa, chap. 2.

36. Schwartz, Never Satisfied, 27, 157.

37. Brumberg, Fasting Girls, 162.

38. Schwartz, Never Satisfied, 56, 65, 160.

39. World Health Organization, “Health System Attainment and Performance in All Member States, Ranked by Eight Measures, Estimates for 1997,” in The World Health Report, 2000 (Geneva: WHO, 2000), annex table 1, 152. WHO has since stopped ranking nations.

40. Carmen DeNavas-Walt, Bernadette D. Proctor, and Jessica C. Smith, Current Population Reports, Income, Poverty, and Health Insurance Coverage in the United States: 2010 (Washington, DC: Government Printing Office, 2011), table 8, C-1 and C-3; “U.S. Healthcare Gets Boost from Charity,” CBS News, February 28, 2008.

41. Henry J. Kaiser Family Foundation, Employer Health Benefits 2011 Annual Survey (Menlo Park, CA: Kaiser Family Foundation, 2011).

42. Children’s Defense Fund, “Improving Children’s Health: Understanding Children’s Health Disparities and Promising Approaches to Address Them” (Washington, DC: Children’s Defense Fund, 2006), 9; Children’s Defense Fund, “Disparities in Children’s Health and Health Coverage” (Washington, DC: Children’s Defense Fund, 2009); Children’s Defense Fund, “Improving Children’s Health,” 13.

43. Rita Mangione-Smith et al., “The Quality of Ambulatory Care Delivered to Children in the United States,” New England Journal of Medicine 357 (2007): 1515–1523.

44. RAND Corporation Press Release, “New Study Finds Serious Gaps in Health Care Quality for America’s Children,” October 10, 2007.

45. Karen Sternheimer, Kids These Days: Facts and Fictions About Today’s Youth, 44–47.



Does Pop Culture Promote Smoking, Toking, and Drinking?

If we can be sure about anything, it is that alcohol and drug abuse contribute mightily to a variety of problems. Family disruption, domestic violence, general violence, and property crimes often share roots in substance abuse. Estimates from the Substance Abuse and Mental Health Services Administration suggest that about 5 million alcohol abusers have children under eighteen living with them. These parents were also very likely to smoke cigarettes and use other drugs, both legal and illegal, and their homes were more turbulent than those of families without substance-abusing parents. A 2001 study found that sons of alcoholics who also exhibit antisocial behaviors perform worse in school, making them more likely to fail academically and perpetuate the cycle of substance abuse.1

It is highly likely that these families also tax local social service agencies, the foster care system, and occasionally the criminal justice system. Smoking contributes substantially to health care costs, increasing the likelihood that users will develop a number of serious illnesses, including America’s number-one killer, cardiovascular disease. Drug and alcohol abuse also led to approximately 4.6 million emergency room visits in 2009. The National Institute on Drug Abuse estimates the costs of alcohol and drug abuse to be $600 billion a year, including the costs of health care, criminal justice, and lost productivity.2

Understanding the causes of substance abuse is obviously in our national interest. Yet typically, when we hear about smoking, drinking, and drug use, we focus only on teens and primarily on popular culture. When a YouTube video appeared of teen star Miley Cyrus smoking salvia from a bong on her eighteenth birthday in 2010, critics pounced, declaring her a bad role model. “Miley is a star and young kids are going to emulate her behavior,” a former member of the California state legislature who had tried unsuccessfully to make the substance illegal told TMZ.3

Cyrus is not the only celebrity who has been called out. For instance, in 2008 hip-hop mogul Sean “Diddy” Combs appeared in ads for Ciroc vodka, adding his brand of cool to portray Ciroc as the drink of the high-end club scene. Does Combs’s endorsement encourage young fans to drink alcohol? What about the CW show Gossip Girl, which frequently features teen drinking in a manner that Newsday calls a “fairly accurate depiction of teen partying across the country”? Likewise, do celebrities who smoke in their movie roles promote teen smoking? Can music provide the impetus for drug use? Is Facebook a “gateway drug,” encouraging teens who see pictures of their friends and acquaintances drunk or using drugs?4

As we will see in this chapter, we tend to associate substance use with teens, despite the fact that adults are actually more likely to smoke, drink, and use illegal drugs. Built on the faulty assumption that kids are both uniquely impacted by media messages and the key players in the substance-abuse problem, many people presume that popular culture is the central culprit in creating use and abuse. Both of these assumptions help us overlook significant economic, ethnic, and gender disparities, which tend to get lost when we focus so much on popular culture.


If there is anything to celebrate about young people today, it is the tremendous decline in the number who smoke cigarettes. According to Monitoring the Future (MTF), a nationally representative study of high school students conducted by the University of Michigan each year since 1975, in 2011 about 40 percent of high school seniors reported ever having tried smoking, the lowest percentage since the survey began. By contrast, in the 1970s nearly three-fourths of high school seniors had smoked a cigarette.5 The number reporting smoking a half a pack or more each day also declined, to about 4 percent in 2011, down from nearly 20 percent in the late 1970s.6 Good news, right? Not if you read many of the stories about this study, which bemoaned that the declines had slowed compared with past years.7

Reasons for this positive shift include an increase in public information about the dangers of smoking, taxes making cigarettes more expensive, and unique public service campaigns created by teens for teens that avoid condescension, to name just a few. Perhaps the most important explanation is the simplest: teens’ parents are less likely to smoke. Between 1965, when the CDC first gathered data, and 2010, the percentage of American adults who smoke was cut by more than half, from 42 percent to 19 percent.8

Nonetheless, it is still important to understand what factors make someone more likely to smoke, since smoking is a major risk factor for a wide range of serious health problems. During the 1990s, private tobacco-industry memoranda became public and confirmed what many antismoking groups had long suspected: cigarette makers knew that the nicotine in cigarettes was addictive. This revelation led to a landmark settlement with many states to reimburse public health costs.

At one point, the Food and Drug Administration even pondered regulating tobacco products as drugs, a move that would have seriously threatened the industry’s profitability. But the FDA backed off this threat; some tobacco makers reorganized, changed their names, and have largely continued unfettered. The focus gradually shifted away from the cigarette manufacturers and onto other industries that were alleged to be key sources of smoking in teens: movies and advertising.

As sociologist Mike A. Males discusses in his book Framing Youth: Ten Myths About the Next Generation, teens became an easy target. First, because most smokers are actually adults, the tobacco industry would lose little business (and gain some good PR) by coming out against teen smoking. Second, smoking became defined as a problem caused by reckless youth who are allegedly easily swayed by peers and popular culture. And finally, there is little political downside to focusing on teens. Adult smokers tend to resent politicians’ attempts to restrict where and when they can smoke, and unlike most teens, they can vote.

Even nonsmokers sometimes see regulations, such as the bans on smoking in public areas that some local ordinances now require, as examples of the government overstepping individual freedoms. In their research, sociologists Justin L. Tuggle and Malcolm D. Holmes found that a smoking ban in restaurants and bars in a northern California county was met with a great deal of resistance from its working-class residents, who saw the ban as a government intrusion.9

The popular culture argument enables us to overlook the biggest influence on teen smoking—family members who smoke—and blame Hollywood. I’m not suggesting that images of smoking have no impact and don’t merit our critical scrutiny, only that we often get sidetracked by what is at best one of many factors in the process of deciding to smoke. And focusing on popular culture itself seems to implicate teens, who we presume are more vulnerable than the rest of us to media influences. Adult smokers get redefined as victims of their teenage selves and get left out of the equation.

Of course, we get a lot of help from the press. Other epidemiological research about smoking can be dry and perhaps too complex for a cool feature story, so the pop culture explanation grabs the spotlight. “Critics Want Smoking in Movies Doused, No Ifs, Ands, or Butts,” read a clever headline in a September 2007 issue of the Toronto Star. The Christian Science Monitor asks if there is “a link between teen smoking and movies,” and the Washington Times states that “Study Links Teen Smoking to Movies.” These stories appear in a variety of sections, most notably the entertainment and features sections; when other studies about smoking get published, the coverage is likely to be small and buried in papers’ front sections, and less likely to be a lead story on the publications’ websites. The movie angle by its very nature commands our attention.

A 2005 Christian Science Monitor article asked the question many people have been encouraged to wonder: is there a link between teen smoking and movies? Quoting Stanton Glantz, director of the Center for Tobacco Control Research at the University of California at San Francisco, the article suggests that movies “[deliver] 400,000 kids a year to the tobacco industry,” an ominous figure to be sure.10

But what does this really mean? The story cites a study published in the journal Pediatrics titled “Effect of Parental R-Rated Movie Restriction on Adolescent Smoking Initiation: A Prospective Study.” The Monitor reported that “the teens who watched the most movies that featured smoking were 2.6 times more likely to try smoking than other adolescents,” making it appear as though the movies were the central cause of their decision to smoke.11

But a closer look at the study itself reveals that the answer is not so simple. First, 90 percent of the kids they studied never smoked, regardless of whether they watched R-rated movies or not.

The study mentions other important factors associated with smoking, including rebelliousness, low self-esteem, high sensation seeking, and of course having parents who smoke. Yet the complexity of how these issues are connected does not make it into the Monitor story, leaving us to focus on the movies. Last, this study tells us about an association, not necessarily a causal connection. It could well be that lenient parents who allow young teens to see more R-rated movies provide limited monitoring overall, a factor associated with more adultlike behavior.

Teens who go out a lot, regardless of where they are going, have more opportunities to smoke and less time committed to other nonsmoking activities. The study found that the relationship between trying a cigarette and R-rated movies is strongest for those whose parents do not smoke, an interesting finding. In all likelihood, a host of factors contribute to deciding to smoke, and yes, popular culture might be among them.

As the authors of the Pediatrics article acknowledge, having parents who smoke means that kids have access to cigarettes without buying them and the detailed knowledge of how to smoke. They also concede that trying a cigarette is not the same thing as becoming a regular smoker (although of course it may lead to becoming one).

A Toronto Star article about another study claims that it can predict who “will become lifelong smokers.” But the study, “Exposure to Smoking Depictions in Movies,” published in the Archives of Pediatrics and Adolescent Medicine, followed its subjects for only two years, far too short a time for us to conclude who is a “lifelong” smoker. The authors note that at the end of their study, 125 of the 4,575 participants who remained in the follow-up group were “established smokers,” a term not clearly defined; that is still less than 3 percent of the sample, far too few to conclude that “pediatricians should … encourage parents to limit viewing to no more than two movies per week.”12

Like other similar studies, this one found a correlation between teen smoking and movie viewing, and as with others, the authors highlight the movie connection while downplaying other factors, such as age, race, smoking by parents and peers, lower parental education, poor school performance, and less participation in extracurricular activities. When these factors are dropped from the discussion, movies do seem like a problem. But regardless of the movies they watch, most teens do not take up the habit.

Although it is important to consider the messages that popular culture communicates about smoking, we must not overlook other, arguably more central, reasons people smoke. Kids sometimes get cigarettes directly from parents, either by sneaking them or, worse yet, by bumming one directly off Mom or Dad. Peers who smoke are also a strong predictor, as is poor school performance. Another strong predictor of trying a cigarette may seem obvious: age. The older one gets, the more likely one is to experiment with adult behaviors that are off-limits to kids.

Among adults, several factors are associated with a greater likelihood of smoking. According to the CDC’s data, men have been and remain more likely to smoke than women. Whites and African Americans are more likely to smoke than Latinos and Asian Americans, but less likely than Native Americans.

Smoking stays relatively constant throughout the life course, diminishing only after age sixty-five. Patterns of cigarette smoking indicate that the percentage of people who have ever smoked is highest for older people, with the exception of those over sixty-five (possibly because those who never smoked have a greater likelihood of reaching sixty-five).13

Those who have had at least one cigarette in the past month do tend to be young adults: more than 35 percent of those between twenty-one and twenty-nine report smoking at least once in the past month. By contrast, older teens sixteen through eighteen have rates of smoking in the past month lower than any age group below sixty-four. And adults over sixty-five are more likely to have smoked in the past month than their fourteen- and fifteen-year-old grandchildren.14

Differences in education and socioeconomic status are very important determinants of who smokes. In 2010, 34 percent of adults with less than a high school diploma smoked, compared with 24 percent of high school graduates, 10 percent of those who completed a bachelor’s degree, and 6 percent of those with a graduate degree. About 29 percent of people living below the poverty level smoke, compared with 18 percent at or above it. A comprehensive 1998 Surgeon General’s report, Tobacco Use Among U.S. Racial/Ethnic Minority Groups, found that many of the racial and ethnic differences disappear when income is taken into account. The report also concludes, as we might suspect, that no single factor explains tobacco use.15 If popular culture were really a central determinant of who smokes and who doesn’t, it would be unlikely that we would see these huge gaps in terms of gender, race, and income. Smoking might be a way for people to construct a sense of masculinity or to deal with the variety of stressors that a limited income—and often limited job autonomy—may bring.

Although movies may not make us smokers, they do provide an interesting cultural backdrop to examine. When most movies were shot in black and white, smoke provided an additional visual layer in an otherwise gray backdrop. Smoking is also a storytelling shortcut that can connote attitude, relationships, desperation, anger, and a multitude of other emotions. Frankly, it is often used as a crutch by filmmakers.

While smoking in movies may have reflected a common pastime midcentury, smoking is no longer a regular part of most American adults’—or teens’—lives, so it is worth questioning its inclusion, especially when it seems gratuitous. Cigarettes have all but left American television, a sharp contrast with the medium’s early years, when newscasters and television hosts smoked on the air and entire programs were sponsored by tobacco makers. Celebrities who smoke are subject to “gotcha” paparazzi shots in tabloids, creating potential embarrassment for those who try to maintain clean, smoke-free images.

If anything, it seems that Hollywood is not keeping up with actual Americans’ movement away from smoking. A study conducted by researchers from the Massachusetts Public Interest Research Group found that smoking in PG-13 movies increased by 50 percent in the two years following the 1998 tobacco settlement, when the industry agreed to pay several states $245 billion over twenty-five years.16

As fewer people smoke, it becomes harder to make the case that movies with characters who smoke are merely reflecting reality. Movie smoking may be a last-ditch effort of the tobacco industry to advertise through product placement, but it has not been as effective as we might presume, even when kids are watching. Most of them just aren’t lighting up.

If not the movies, what about the cartoon character Joe Camel? Did he convert any unsuspecting kids through ads, T-shirts, and other products? Critics blamed him for being cute and enticing to a generation of would-be smokers.

But the data tell us otherwise. If we look at smoking trends for high school seniors after Joe’s 1987 debut, we might be surprised. In 1988 about 66 percent of high school seniors reported that they had ever smoked a cigarette; that percentage would never be as high again. After dropping for several years, the number briefly rose to 65 percent in 1997, but then continued its decline to a low of 40 percent of high school seniors who ever tried a cigarette in 2011.17 So while critics feared Joe Camel would seduce a new generation to smoke, the numbers don’t bear that out.

A 1995 study published in the Journal of Marketing found that a majority of very young kids (three to six years old) recognized Joe Camel and the Marlboro Man when prompted by researchers. However, the older the children were, the more negative their attitudes about cigarettes were—it was the promotional products that the kids liked.18 Yes, it is in bad taste to have cartoon characters promoting cigarettes, and the bad PR Camel received led to Joe’s demise in 1997.

As I discuss in the next chapter, critics often charge that advertising is a particularly potent way to influence people, especially kids. Sociologist Michael Schudson analyzes the history of advertising and smoking in his book Advertising, the Uneasy Persuasion and concludes that “major consumer changes are rarely wrought by advertising.” In the early twentieth century, female smoking posed a threat to the gender order, and women were the target of concern. Smoking defied Victorian notions of femininity and at the time was a feminist act of defiance. During that era, cigarette manufacturing made smoking more convenient—no longer would users need to roll their own—and the new blends made the taste milder, thus attracting more smokers. At the same time, soldiers received cigarette rations during wartime, in part because they helped with alertness and curbed appetites, thus likely creating addiction for the millions who served during World War I and World War II. Rather than the result of advertising, Schudson argues, “change in consumption patterns … has roots deep in cultural change and political conflict that advertising often responds to but rarely creates.”19

I know you might be thinking, okay, that might have been true in the early decades of the twentieth century, when advertising was in its infancy, but what about its more sophisticated turn at the end of the century? In a 1997 study, psychologist Robin Maria Turco found that kids who had already tried smoking tended to have a more positive association with cigarette ads than the fifth-, seventh-, and ninth-grade kids in her study who had never smoked. Her subjects tended to view nonsmokers more positively than smokers, especially if they had never smoked themselves. This tells us that advertising itself doesn’t necessarily cause kids to start smoking and that those who do smoke are primed by their own behavior to recognize the ads. So the ads do have an impact, particularly among those who have already smoked.20


Okay, but what about all the ads for beer during just about any sporting event? Can all the beer ads that air make drinking seem essential to having fun? In contrast to smoking, which is gradually being phased out of many forms of social life, alcohol is not. Our biggest celebrations, fancy meetings, and casual gatherings often involve alcohol as a normal part of adult life. It’s more than popular culture that encourages young people to think that drinking is normal.

The vast majority of adults over twenty-one have had alcohol at least once, but having a drink in the past month is highest for people in their twenties and thirties. Young teens are the least likely to have had an alcoholic beverage in the past month: 3 percent of twelve- and thirteen-year-olds. The percentage rises with age, but fourteen- to seventeen-year-olds are still less likely to have had a drink in the past month than anyone their senior. Not surprisingly, the biggest jump in recent consumption happens when young people turn twenty-one and drinking alcohol is legal.21

But there is one surprise, one we seldom hear about: according to the National Survey on Drug Use and Health (NSDUH), in 2010 fifty-five- to fifty-nine-year-olds were as likely to binge drink (five or more drinks on the same occasion) as sixteen- to seventeen-year-olds. The percentage of sixty- to sixty-four-year-olds who binge drink was higher than that of fourteen- and fifteen-year-olds (16 versus 11 percent), and even those sixty-five and over are more likely to binge drink than young teens fourteen to fifteen. The peak age group for binge drinking was twenty-one to twenty-five, and the rates only gradually level off for people in their thirties and forties.22

Heavy drinking—bingeing more than five times within one month—also peaks at twenty-one and tapers off by thirty. But sixty- to sixty-four-year-olds are as likely to be classified as heavy drinkers as sixteen- and seventeen-year-olds (just over 3 percent of both age groups). Teens twelve to fourteen are the least likely to be heavy drinkers of any age group. A study released in June 2007 by the National Institute on Alcohol Abuse and Alcoholism found that adults thirty to sixty-four are most likely to have alcohol abuse problems, with an average age of onset of twenty-two and a half. White men earning more than seventy thousand dollars are particularly likely to abuse alcohol, according to this report.23 Also, teens with parents who binge drink are more likely to binge themselves.24

Yes, alcohol and cigarettes are legal for adults, and therefore adults over twenty-one and eighteen, respectively, have the right to drink and smoke as much as they like. Though legal, these behaviors contribute substantially to public health costs, accidents, and family instability.

Our typical focus only on teens takes the spotlight off the vast majority of substance abusers: their older siblings, parents, and grandparents. The popular culture explanation that teens are uniquely vulnerable to media images due to their age doesn’t even attempt to explain the persistence of substance use among adults throughout their lives, or why teens are among the least likely to use both legal and illegal substances.

Yet still we have a tendency to want to blame various forms of the media for enticing kids to drink. A 1998 study found that 40 percent of television shows depict people consuming alcohol, including teens, although the authors found that drinking teens tend to be portrayed negatively.25 That same year, a study published in the journal Pediatrics claimed that increased music video viewing contributed to the onset of drinking alcohol. The authors followed ninth graders in the San Jose, California, area for two years. While television viewing itself was not associated with drinking, nor was video game playing or computer use, the authors argue that watching music videos was a strong predictor.

Yet like most studies claiming media effects, this one only measured a correlation between the two, which the authors acknowledge does not allow them to assess causation directly. They also acknowledge that they assessed only the amount of music videos watched without regard to the content; additionally, the kids who dropped out of the study were more likely to be drinkers, further diminishing the predictive value of their findings. In spite of these shortcomings, they claim a strong association exists, measured by an odds ratio of 1.3, meaning that kids who have had a drink watch about a third more music videos than nondrinkers.26

Perhaps the best predictor of drinking that may be associated with greater interest in music and music videos is age. It is likely that both increase as people get older. But studies like these get news coverage, as this one did on CNN, in USA Today, and in the Washington Post, because the media explanation is itself a hook to get our attention. Headlines like USA Today’s “Teen Drinking Linked to Music Videos, TV” make it appear as though the results are conclusive and clear.27

The rationale that music videos, laden with alcohol advertising, promote drinking might feel compelling when these and other headlines emerge. Likewise, attempts to blame advertising have led to many other studies. For example, in 2006 a study published in the Archives of Pediatrics and Adolescent Medicine purported to find evidence that watching more television and ads for alcohol leads young people to drink more over time. “Adverts Do Make Teens Drink More,” London’s Daily Mail reported upon the study’s release.28

The study asked respondents between the ages of fifteen and twenty-six a number of questions, including how often they drink alcohol and how much they drink, as well as how many times they had seen ads for alcohol. In addition, the researchers determined how much money had been spent on alcohol advertising in each respondent’s area and claimed that both of these factors are positively associated with more drinking.29 But the study has several major shortcomings, enough that other researchers publicly critiqued its claims within the same issue of the journal in which it was published.

The critics note that the majority of the sample dropped out during the study, casting doubt on the claim that the study is really a long-term analysis of young people’s drinking patterns. Only 31 percent of the original participants remained, in part because young adults are likely to move or change phone numbers, making it difficult to keep track of them. Second, one critic contended that the data actually show the opposite of the researchers’ claims: that more advertising was linked with less drinking.30

In my review of this study, two other issues stand out. While the authors suggest that people do not selectively pay attention to alcohol ads if they drink more, people are probably more likely to recall seeing ads for alcohol if drinking is a regular part of their lives. And although it is interesting to assess how much money is spent on alcohol advertising within each market, the dollar amount itself does not necessarily translate into more ads shown.

For instance, the authors note that during 1999–2000, alcohol advertisements cost $88,750,000 in Los Angeles and $78,000 in Tulsa, Oklahoma, and teens in LA drank more. Not only are advertising costs higher in LA, but there are many more media outlets than in Tulsa (especially if we factor in ads during sporting events, with major-league baseball, basketball, and other events held there). But besides the amount of advertising, we need to consider other regional factors. Compared with LA, Tulsa is a much more religiously conservative area, and it is likely harder for young people to obtain alcohol there. These factors get left out of the study, which simply presumes strong media effects without looking at the broader context.

A similar study found that the most important factor associated with teen drinking was positive beliefs about drinking, despite the paper’s provocative title, “Frogs Sell Beer.” There are other important disparities not explained by media exposure; males typically drink more than females and are about twice as likely to drive after drinking. Whites and Native Americans are also more likely than other racial and ethnic groups to drink heavily, particularly in comparison with African Americans.31 The advertising explanation can’t explain these differences very well.

Does the cultural backdrop help shape ideas about drinking alcohol? Sure. But is it the most important factor when it comes to predicting dangerous, risky behaviors? Not exactly. According to the NSDUH, drinking among teens twelve to eighteen is lower than in any other age group and increases with age, peaking in the early twenties.32 Although teens get the lion’s share of the focus, drinking among minors is not nearly as big an issue as it is for young adults, who are more likely to binge drink than any other age group.

One of the best predictors of teen alcohol consumption is getting closer to adulthood. This suggests that our ideal of quasi-prohibition has a dangerous side effect: once they are away from parental supervision, many young people take advantage of their newfound freedom. Young adults enrolled in college full-time are more likely to drink than their peers.33

Drinking and driving also increases dramatically after eighteen, from 6 percent of sixteen- and seventeen-year-olds to 15 percent of eighteen- to twenty-year-olds. If you think those percentages are high, 23 percent of twenty-one- to twenty-five-year-olds have driven after drinking. Yes, 6 percent of the most inexperienced drivers getting behind the wheel after drinking is cause for concern, but so is the fact that it is only after age sixty-five that drivers are less likely to have been drinking than teens.34

Often left out of the story is the fact that although most teens will have had a drink before high school graduation, they are much less likely to have done so than their predecessors were. According to Monitoring the Future, the annual University of Michigan survey of high school students conducted since 1975, the percentage of high school seniors who ever had a drink hovered between 90 and 93 percent throughout the 1970s and ’80s, then began dropping. In the 2000s, the percentage fell below 80 for the first time, and in 2011 it was at an all-time low of 70 percent.35

But let’s be honest about alcohol—having tasted it is not necessarily the first step on the path toward crime and depravity. For most adults, drinking is a widely accepted practice, and adults often convince themselves that they can handle their alcohol better than teens can. Yet we offer few opportunities for young people to learn responsible drinking behavior alongside older adults, instead encouraging secret drinking among peers.

Perhaps, as Norman Constantine, director of the Public Health Institute’s Center for Research on Adolescent Health and Development, suggests, we should try a different approach. He notes that we miss the opportunity for adults to teach young people how to drink in moderation before they leave home, which has contributed to the unsafe drinking practices that we so often fear.36 This isn’t easy for parents to do—as Michael Winerip writes in the New York Times, concerns about teen smoking and drinking have “been turned into a war of good versus evil,” making adults who drink even in moderation seem hypocritical. Teens can see that people who drink alcohol are not necessarily harming themselves, so this message ultimately fails. But as he details, parents often have no clear guidelines for teaching young people how to drink in moderation. Some parents think this means buying alcohol and allowing drinking in their homes, figuring this is safer than drinking someplace else. But this is not teaching moderation.

Winerip decided that any evidence that his teen sons had been drinking meant they had had too much and would be punished, even though drinking without intoxication or other impairment is the very definition of moderation. Perhaps this is why teens who live with their parents are less likely to drink as much and, more important, are less likely to drink and drive than teens at college are. Fear of their parents’ reactions often serves as the most powerful form of control. Some university presidents questioned American drinking laws in 2008, calling for a national dialogue on how to most realistically address young adults’ alcohol use and abuse.37

There is a big difference between tasting and abusing alcohol. In 1991 the MTF began asking kids if they had ever been drunk, a more important measure than ever having had a drink. In 1991, 65 percent of seniors reported that they had been drunk at least once. This number had declined to 51 percent by 2011; it’s still higher than one might prefer, but it is the lowest in the survey’s history. Despite the ads, MTV, movies, and other media we so often blame for promoting drinking, kids are doing it less. Maybe we should ask what has made teens less likely to drink alcohol in recent years.

Legal and Illegal Drugs

Just as with tobacco and alcohol, teens’ use of illegal drugs has also declined over the past few decades. In the late 1990s, just under half of high school seniors had tried marijuana, the most common illegal drug, a percentage that has remained stable in recent years following a brief dip. The rate had been under 40 percent in the early 1990s, but between 1978 and 1982 it hovered close to 60 percent and stayed above 50 percent until 1988. Lifetime use of most other drugs remained low in 2011: less than 10 percent had ever tried ecstasy, 5 percent had tried cocaine, 2 percent had tried crystal meth, and less than 2 percent had tried heroin.38

Despite these and other improvements, teens remain the central target group of concern, particularly in the news media. A Washington Post article cites “magazines, reality television, and movies” for their portrayal of “young, female celebrities as successful, thin—and drug users.” A USA Today story called “More Television Characters Are Going to Pot” notes fears that “the glamorization of pot could boost its use among youths.” Citing shows like HBO’s Entourage, FX’s Over There, and Showtime’s Weeds (not exactly teen-oriented programs), the article quotes a representative for the Partnership for a Drug-Free America, who says that “these are trendsetting shows. They affect behavior and attitudes, particularly teens. When glamorization of drugs has climbed, changes in teen attitudes followed.”39

In the years immediately following the debut of these programs, the number of teens who had ever used marijuana fell 3 percent before returning to levels similar to those when the shows first aired.40 The article also raises an important and frequently debated question: should popular culture portray real-life situations, which sometimes include drugs, or idealize behavior and sacrifice a sense of reality?

Pop culture references to drugs are certainly not new; there are plenty of examples from the 1960s and even from earlier times. Songs like Cole Porter’s “I Get a Kick Out of You” from 1936 originally included a line about cocaine. Fats Waller’s 1934 song “Viper’s Drag” is about a man dreaming of “a reefer.” Just as drug references have been around for decades, so have drugs. Starting with peddlers selling “health tonics” in the nineteenth century, Americans regularly dosed themselves with concoctions containing alcohol, cocaine, marijuana, morphine, and heroin, mostly without knowing that dangerous addiction could follow. In a time when the temperance movement was strong, alcohol-free tonics gained popularity, like Coca-Cola, which originally contained cocaine. At the same time, the rise of the medical profession brought with it the belief that medicines could cure everyday ailments as well as diseases. It is very likely that the period in American history that produced the most drug addicts was the Victorian era, not contemporary times. The difference is that no drugs were illegal then. The ones that became illegal were made so not just because of the danger they posed, but also because of the allegedly dangerous people who took them. Drugs associated with feared groups, from African Americans in the Deep South to Mexican Americans in the Southwest and the youth counterculture movement, became illegal in response to the threat those groups supposedly posed.41 While much of our angst now focuses on teens, they are only a small part of the drug-taking population.

According to the Substance Abuse and Mental Health Services Administration, in 2010, 47 percent of Americans twelve and over had ever used an illegal drug (marijuana, cocaine, heroin, hallucinogens, inhalants, or nonmedical use of pharmaceuticals). When we break down the percentage by age, we see an interesting picture, one of generational drug use jump-started by the baby-boom generation. About 60 percent of adults forty-five to fifty-four had used an illegal drug in their lifetime, compared with 10 percent of twelve-year-olds. The percentage increases with age, up to 49 percent by age eighteen, approaching the parents’ generation’s experience by the early twenties. The generational divide is clear: like the baby boomers, more than 60 percent of young adults twenty-four to twenty-nine have used an illegal drug, while thirty- to forty-four-year-olds are slightly less likely to have done so.

The group with the highest reported illegal drug use in the past year is eighteen- to twenty-year-olds, with 23 percent reporting use. Illegal drug use continues through the twenties and thirties at double-digit percentages, dropping below 10 percent after age thirty-five, to just below the rate of fourteen- and fifteen-year-olds. The recent drug use of twelve- to thirteen-year-olds is lower than that of all age groups except those sixty and older.42

A significant portion of the population begins taking drugs as adults, contrary to conventional wisdom. In 2010 43 percent of people who tried an illegal drug for the first time were over eighteen. Adults are also far more likely to end up in the emergency room due to their drug use than teens; according to the Drug Abuse Warning Network (DAWN) report, 81 percent of emergency room patients are over twenty.43

Recently, we have heard a lot about prescription drug abuse, particularly as the Internet makes it easy to buy drugs illegally without a doctor’s prescription. A USA Today story warns parents that teens use the Internet not only to buy drugs but also to talk about using them online. But using pharmaceuticals (pain relievers, tranquilizers, stimulants, or sedatives) without a prescription is also widespread among adults; in 2009 more than 1 million emergency room visits were the result of prescription drug use, a whopping 98 percent increase since 2004. Adults were far more likely than teens to need emergency treatment for painkillers like oxycodone (three to one) and hydrocodone (four to one).44

Age is not the only factor associated with drug use: males are more likely to have used an illegal drug, as are African Americans. Among those eighteen to twenty-five, the age group most likely to currently use drugs, less education is associated with drug use in the past thirty days, although not with lifetime use. Latinos and Asian Americans are less likely to use illegal drugs than other racial and ethnic groups.45 Just as with other substance use, it is likely that these factors are among the most important in understanding why people use and misuse drugs, alcohol, and tobacco. Using substances may be a coping mechanism to deal with the stress or lack of opportunity many Native Americans feel living on reservations with very high unemployment rates.

At the same time, we live in a society that encourages better living through chemistry. The endless ads to enhance male sexual functioning, to take away our aches and pains, to counter our poor eating habits, and just about anything else that affects a large enough group to mass-produce a drug for them remind us that drugs can make our lives better. Yes, most of these in theory require a prescription, but as CNN reported in 2008, many sites include only an online checklist instead of an in- person medical exam, meaning all anyone needs to buy drugs is an Internet connection and a valid credit card.46 While this remains mostly unregulated, pharmaceutical companies profit from what may seem like a safe—and less demonized—way to take drugs.

Substance Use and Social Structure

By associating substance use and the problems it brings with teens and popular culture, we allow ourselves to ignore several important structural factors. Although popular culture is a good place to start critically examining the role that substance use plays in our lives, this focus sometimes obscures the role that alcohol, tobacco, and pharmaceutical manufacturers play in selling their products to the entire population, not just youth. There is a great profit motive involved in normalizing substance use, whether this involves alcohol or mood-altering drugs sold by prescription.

There are many shades of gray when it comes to understanding the role that substance use plays in our lives. For instance, many people diagnosed with attention deficit disorder (ADD) have found that drugs like Ritalin and Adderall have helped them concentrate and accomplish educational and professional tasks better. And some people have abused these drugs as they have become more widely available. As sociologist Meika Loe found in her research on college students diagnosed with ADD, those attending more competitive universities faced greater educational pressure to continue to use the drugs. She found that many students struggled with their sense of identity, in some cases feeling like they were “not themselves” on the drug but knowing they needed it to perform well in school.47 In order to understand why people use these drugs—with and without a prescription—it is important to understand the context of a competitive higher-education system and perhaps even the limits of the job market that add to this competitive environment.

Besides educational competition, the social construction of gender often encourages risk-taking behavior among men. Being able to keep up with or outdrink one's peers is often part of creating a sense of masculinity, and seeming tough enough to handle alcohol could help explain gender disparities in alcohol use as well as in drunk driving. Likewise, striving for an idealized body might mean that some people use drugs to help them lose weight or take steroids to build muscle mass.

And as the disparities by race and ethnicity, income, and education in tobacco, alcohol, and drug use reveal, there are other forces far more influential than popular culture in determining whether people use and abuse substances. The stressors of joblessness or of dead-end jobs with little room for growth or autonomy may make substance abuse more likely, as does the availability of particular drugs in areas with little economic opportunity outside of the illegal drug trade.

And last, use is not the same as abuse; as much as it might pain our Puritan-rooted culture to acknowledge, many young people try a cigarette, drink alcohol, or even take drugs without becoming addicted or engaging in problematic behaviors. As I discussed in the last chapter, there is a cultural tension in the United States between seeking pleasure and maintaining self-control. Alcohol and legal and illegal drug use reflect this unresolved tension. Medicinal marijuana is a good example of our often contradictory views toward substance use. While legal for medicinal purposes in sixteen states and the District of Columbia, marijuana is still considered illegal and without therapeutic use by the federal government. This creates challenges for medicinal marijuana distribution, and gaps in state laws sometimes make it easier to obtain marijuana for recreational use.

So although drug use in music lyrics, alcohol use on television, and smoking in movies are worth criticizing, they are not central to understanding the key factors that predict substance abuse. As something that can shatter families, exacerbate poverty, and limit educational opportunities—those with drug convictions are not eligible for federal financial aid—substance abuse is too serious an issue to blame primarily on popular culture.


Notes

1. National Survey on Drug Use and Health, "Alcohol Dependence or Abuse Among Parents with Children Living in the Home," in The NSDUH Report (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2004); Andrew K. Whitacre, "Alcoholic Traits: Like Father, Like Son," PsycPort, March 15, 2001, millenniumhealth_181153_76_9414797378704.html.

2. Substance Abuse and Mental Health Services Administration, Center for Behavioral Health Statistics and Quality, The DAWN Report: Highlights of the 2009 Drug Abuse Warning Network (DAWN) Findings on Drug-Related Emergency Department Visits (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011), visits; National Institute on Drug Abuse, “Drug Facts: Understanding Drug Abuse and Addiction” (Bethesda, MD: National Institutes of Health, 2011), addiction#references.

3. “Miley Cyrus’ Bong Fires Up Salvia Ban Movement,” Huffington Post, February 10, 2011, up_n_795345.html.

4. “An Epidemic of Teen Drinking,” Newsday, October 23, 2007, B12; Laura Stampler, “Teens on Facebook and Social Media Sites More Likely to Drink, Smoke, and Use Drugs,” Huffington Post, October 24, 2011,

5. University of Michigan, Monitoring the Future Study, “Long-Term Trends in Thirty- Day Prevalence of Use of Various Drugs for Twelfth Graders,” Survey Research Center, Institute for Social Research, 2011, table 15,

6. University of Michigan, Monitoring the Future Study, “Trends in Prevalence of Use of Cigarettes in Grades Eight, Ten, and Twelve,” Survey Research Center, Institute for Social Research, 2011, table 1,

7. See, for example, Michael Felberbaum, “US Urges Greater Efforts to Curb Teen Smoking,” Associated Press, March 9, 2012, efforts-curb-teen-smoking/Z8lWnZC6XS6dBKxSwn1i0O/story.html.

8. Centers for Disease Control and Prevention, “Percentage of Adults Who Were Current, Former, or Never Smokers, Overall and by Sex, Race, Hispanic Origin, Age, Education, and Poverty Status,” National Health Interview Surveys, Selected Years— United States, 1965–2006 (Atlanta, GA: Centers for Disease Control and Prevention, 2007),; Centers for Disease Control and Prevention, “Adult Cigarette Smoking in the United States: Current Estimate” (Atlanta, GA: Centers for Disease Control and Prevention, 2012),

9. Justin L. Tuggle and Malcolm D. Holmes, “Blowing Smoke: Status Politics and the Shasta County Smoking Ban,” Deviant Behavior 18 (1997): 77–94.

10. Randy Dotinga, “A Link Between Smoking and Movies?,” Christian Science Monitor, November 22, 2005, 2.

11. James D. Sargent et al., “Effect of Parental R-Rated Movie Restriction on Adolescent Smoking Initiation: A Prospective Study,” 149–156.

12. Brooks Bolick, "Critics Want Smoking Doused, No Ifs, Ands, or Butts," Toronto Star, September 7, 2007, E9; James Sargent et al., "Exposure to Smoking Depictions in Movies."

13. National Survey on Drug Use and Health, “Tobacco Product Use in Lifetime, Past Year, and Past Month, by Detailed Age Category: Percentages, 2005 and 2006,” 2006 National Survey on Drug Use and Health (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2008),

14. National Survey on Drug Use and Health, “Past Month Cigarette Use Among Persons 12 or Older, by Age, 2010,” 2010 National Survey on Drug Use and Health (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011),

15. Centers for Disease Control and Prevention, "Adult Cigarette Smoking in the United States"; US Department of Health and Human Services, Tobacco Use Among U.S. Racial/Ethnic Minority Groups—African Americans, American Indians and Alaska Natives, Asian Americans and Pacific Islanders, and Hispanics: A Report of the Surgeon General (Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking, 1998),

16. Crystal Ng and Bradley Dakake, Tobacco at the Movies: Tobacco Use in PG-13 Films (Boston: Massachusetts Public Interest Research Group, 2002).

17. University of Michigan, Monitoring the Future Study, table 15.

18. Richard Mizerski, "The Relationship Between Cartoon Trade Character Recognition and Attitude Toward Product Category in Young Children," Journal of Marketing 59 (1995): 58–70.

19. Michael Schudson, Advertising, the Uneasy Persuasion: Its Dubious Impact on American Society, 179, 197.

20. Robin Maria Turco, “Effects of Exposure to Cigarette Advertisements on Adolescents’ Attitudes Towards Smoking,” Journal of Applied Social Psychology 27 (1997): 1115–1130.

21. National Survey on Drug Use and Health, “Current, Binge, and Heavy Alcohol Use Among Persons Aged 12 or Older, by Age, 2010,” 2010 National Survey on Drug Use and Health (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011),

22. Ibid.

23. Ibid.; National Institute on Alcohol Abuse and Alcoholism Press Release, "Alcohol Survey Reveals 'Lost Decade' Between Ages of Disorder Onset and Treatment," July 2, 2007 (Bethesda, MD: National Institute on Alcohol Abuse and Alcoholism). See also Karen Sternheimer, "Drinks Anyone?," in Everyday Sociology Blog (New York: W. W. Norton, 2007),

24. Substance Abuse and Mental Health Services Administration Press Release, "New Nationwide Report Estimates That 40 Percent of Underage Drinkers Received Free Alcohol from Adults over 21" (Rockville, MD: Substance Abuse and Mental Health Services Administration, June 26, 2008),

25. Susan Lang, “Teen Alcohol Use Is a Prime-Time TV Staple, Study Finds,” Cornell Chronicle, November 5, 1998,

26. Thomas N. Robinson, Helen L. Chen, and Joel D. Killen, “Television and Music Video Exposure and Risk of Adolescent Alcohol Use.”

27. "Music Videos Linked to Teen Drinking," November 2, 1998, violence-pediatrics-baby-sitter?_s=PM:HEALTH; Marilyn Elias, "Teen Drinking Linked to Music Videos, TV," USA Today, November 3, 1998, D1; Mary Jo Kochakian, "When Message Is Drinking, Teens Listen," Washington Post, November 21, 1998, V4.

28. Julie Wheldon, “Adverts Do Make Teens Drink More,” Daily Mail (London), January 3, 2006, 18.

29. Leslie B. Snyder et al., “Effects of Alcohol Advertising Exposure on Drinking Among Youth.”

30. Don E. Schultz, “Challenges to Study on Alcohol Advertising Effects on Youth Drinking,” Archives of Pediatric and Adolescent Medicine 160 (2006): 857; Reginald Smart, “Limitations of Study on Alcohol Advertising Effects on Youth Drinking,” Archives of Pediatric and Adolescent Medicine 160 (2006): 857–858.

31. Douglas A. Gentile et al., “Frogs Sell Beer: The Effects of Beer Advertisements on Adolescent Drinking Knowledge, Attitudes, and Behavior,” paper presented at the Biennial Conference of the Society for Research in Child Development, Minneapolis, April 2001; National Survey on Drug Use and Health, “Current, Binge, and Heavy Alcohol Use.”

32. National Survey on Drug Use and Health, “Current, Binge, and Heavy Alcohol Use.”

33. Ibid.

34. Ibid., see table 3.5.

35. University of Michigan, Monitoring the Future Study.

36. Quoted in "An Epidemic of Teen Drinking," Newsday, October 23, 2007, B12.

37. Justin Pope, "College Drinking Debate: 18 or 21?," Chicago Tribune, August 19, 2008,

38. University of Michigan, Monitoring the Future Study.

39. Ceci Connolly, "Teen Girls Using Pills, Smoking More Than Boys," Washington Post, February 9, 2006, A3; Gary Strauss, "More Television Characters Are Going to Pot," USA Today, August 1, 2005, D1.

40. University of Michigan, Monitoring the Future Study, table 15.

41. Eric Schlosser, Reefer Madness: Sex, Drugs, and Cheap Labor in the American Black Market (New York: Houghton Mifflin, 2003).

42. National Survey on Drug Use and Health, "Illicit Drug Use in Lifetime, Past Year, and Past Month, by Detailed Age Category: Percentages, 2009 and 2010," 2010 National Survey on Drug Use and Health (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011),

43. National Survey on Drug Use and Health, "Initiation of Illicit Drug Use," 2010 National Survey on Drug Use and Health (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011), 192; Substance Abuse and Mental Health Services Administration, Office of Applied Studies, Drug Abuse Warning Network, 2009, "ED Visits Involving Drug Misuse or Abuse, by Age: 2009" (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2010),

44. Donna Leinwand, “Drug Chat Pervasive Online,” USA Today, June 19, 2007, A4; Substance Abuse and Mental Health Services Administration, Office of Applied Studies, Drug Abuse Warning Network, 2009.

45. National Survey on Drug Use and Health, "Past Month Illicit Drug Use Among Persons Aged 12 or Older, by Race/Ethnicity, 2002–2010," 2010 National Survey on Drug Use and Health (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011),

46. Drew Griffin and David Fitzpatrick, "Widow: My Husband Died from Online Drugs," May 22, 2008,

47. Meika Loe, “The Prescription of a New Generation.”



Consumption and Materialism: A New Generation of Greed?

Would you become a living advertisement for the right price? College students like Shawn Taylor of Toronto, Canada, would. In 2006 the twenty-seven-year-old journalism student told the Toronto Sun that he would gladly exchange the blank space on his clothing for corporate logos to help with tuition. (According to his blog, he garnered nearly thirty-three hundred dollars in donations.) I’m not sure if Shawn found a corporate sponsor, but his idea is not without precedent. A few years prior, two New Jersey teens put themselves on the market to be walking advertisements to fellow college students in exchange for tuition dollars. And it worked—a credit card company bit and paid for their services.1

Their plan led to massive media coverage and discussion about how branding has crept into just about every aspect of young Americans' lives, even education.2 But considering the rising cost of higher education and the massive debt many students and their families take on in the process, self-branding may be a less onerous form of debt servitude than the traditional route of hefty student-loan payments.

For many Americans across the age spectrum, how much we have and what brands we buy contribute to the production of social status. Our things help us claim membership in "status communities," social theorist Max Weber's term for the formal or informal groups we try to appear to belong to through displays of material goods. For better or for worse, our things have real implications for how others define us socially, and this process is certainly worthy of critical scrutiny.

However, most (although not all) criticism of materialism and consumerism is laid at the feet of children and teens. Concerns that young people have misguided priorities, want things they don't need that their parents can't afford, and are overly focused on material goods apply just as much to their older counterparts. Observers cite surveys that show incoming college freshmen are more likely to value their income potential than were college students of the past. Television shows like MTV's My Super Sweet 16 feature young people whose demands for high-end everything make the word spoiled seem like a modest understatement. A Harris Interactive survey of eight- to eighteen-year-olds conducted in 2007 found that the majority—74 percent of teens and 66 percent of preteens—agreed that they "would be happier if I had more money to buy more things for myself." A Chicago Tribune article that same year described tweens and teens as "the most brand-oriented and materialistic generation in history."3

Children's and teens' increasing consumer knowledge and power clearly make some adults uneasy. The Washington Post ran a front-page article about DC-area teens who insist on wearing Dolce and Gabbana, Coach, Burberry, Gucci, and other high-priced brands. Cast as shallow spendthrifts whose poor parents struggle in "modest townhome[s]," the article highlights a few teens whose only sense of caution is in "sticking with Coach and Kate Spade," since "Prada is really expensive."4

Sometimes media coverage portrays young people not just as superficial dupes of marketers but as dangerous. A Time article titled "Who's in Charge Here?" cautioned that materialistic youth could be deadly.5 The story starts by describing how an overindulged seventeen-year-old crashed her Mercedes into another teenager while driving drunk, killing the other teen.

Anecdotes about spoiled kids abound in stories such as these, portraying marketplace knowledge as a sign of overindulgence; the Time article describes a preschooler who told her teacher she was wearing a Calvin Klein dress. “Kids shouldn’t know about designers by age four,” the teacher laments. “They should be oblivious to this stuff.” I’m guessing the child was aware of designer names because they are important to her parents. Yet the problem here is cast as the child’s, not the adults’.

Children continue to be the focus of our fears of hyperconsumption, especially when it appears that children's consumer knowledge is greater than that of their parents. Kids are thought to be especially influential when parents are purchasing computers or other technology products, and estimates of the purchases children influence range from $100 billion to $300 billion annually. A US News & World Report article reported on "kidfluence," the power children have to influence their parents' purchasing decisions. Kidfluence challenges the conventional notion that children are either too influential or easy prey who need protection from "premature consumerism."6

Who is the battle really between? Whereas many adults outwardly claim that the conflict is between powerful advertisers and vulnerable kids, it appears the real struggle is between adults and children, and parents don’t exactly know what to do. That’s probably because they are fighting a battle that might be seen instead as an opportunity to teach and learn about setting limits. Perhaps advertising creates a sense of inadequacy in parents because many can’t possibly meet all of their children’s material desires. As products of a consumerist society themselves, many believe that they should.

Stories of out-of-control adult shoppers are common when the holiday buying season starts, but concerns about consumption and advertising usually focus exclusively on children and teens because many believe that they are easily influenced. But are young people really the naive consumers we often presume them to be? If we are to look critically at children and teens, we must consider adults' role in materialism as well. Parents are not just victims who are overwhelmed by their children's unrelenting requests for more; they are often active participants in hyperconsumption themselves.

There certainly are media images of adults who spend themselves into trouble. Self-help talk shows commonly feature shopaholics—usually young women, as on CNBC's Princess—who are portrayed as clueless, selfish, and superficial. The Real Housewives franchise features affluent women who seem to have little to do besides shop, throw parties, and fight with each other. Yet we seldom consider the impact that materialism has on our environment and our lives when the consumers are adults. Critics of hyperconsumption, defined as consuming to excess, get more traction when their message focuses on children, as in books like Consuming Kids, Born to Buy, and Branded: The Buying and Selling of Teenagers.7

Could the temptation of an ever-increasing number of gadgets, like smartphones, iPads, and designer brands, be luring young people away from things that really matter and destroying the environment? Though that may be the case, focusing exclusively on young people as easily persuaded into the world of consumer goods ignores adult consumption and our broader economic context.

Approximately two-thirds of economic growth in the United States can be attributed to consumer purchases, meaning that consumers are the engine driving the economy. In times of trouble, we are encouraged to shop our way back to good times. While excessive buying certainly has its downsides, including personal debt, environmental costs, and distraction from deeper personal and social issues, our economy is predicated on consumption fervor. Economists describe the paradox of saving: if people spent only what they needed to and saved as much as possible, it might be good for them personally but not for the economy as a whole, especially in times of recession. Falling demand and falling prices for goods might be good for an individual consumer, but they can wreak havoc on an economy, leading to layoffs and overall stagnation.

This chapter explores how and why we deflect concern about advertising and consumption onto children and teens, as well as the relationship between consumption and social problems more generally. Complaints about children's consumption reflect ambivalence about our consumer-driven culture. Materialism has real environmental effects, but by focusing only on children's and teens' materialism, we do little to address the problems hyperconsumption might cause. In contrast to complaints that today's children are excessively materialistic as a result of advertising, this chapter considers the social and economic factors that create the view that having more stuff is both personally fulfilling and socially advantageous.


Child Consumers and Social Structure

The fear that children are lured into our hyperconsumerist society too soon draws on romantic notions of childhood innocence, in which children are somehow untainted by consumer culture until advertisers enter their allegedly pure space. In reality, consumption often precedes birth. Parents with the means to do so spend thousands on branded nursery furniture and the right stroller, car seat, and brand-name clothes.

Although blaming affluent parents for our culture of consumption may seem like the answer, the truth is that our highly consumerist society has been created and sustained by a large shift in the American economy following World War II. Economist Daniel Bell described this as a “postindustrial” economy based on surplus and driven by consumption.8 We live in a consumption-oriented society not simply because parents can’t say no but because our economy has been built on consuming abundance. Instead of recognizing how these broad economic forces shape our buying habits, we tend to blame individual consumers, and usually other people’s consumption at that.

Some charge that parents now spend "guilt money" on children to make up for the time they can no longer spend with them because they are so busy working to buy more stuff. But Ellen Galinsky, director of the Families and Work Institute, challenged the common belief that "selfish, greedy parents … sacrifice their children at the altar of their own materialism." In her research, Galinsky observed that parents in fact make family time an important priority. Recent studies indicate that parents actually spend more time with children now than in 1965, when such surveys were first conducted. According to a 2010 report, time spent with children decreased in the 1970s and 1980s but rose in the 1990s and 2000s; mothers spent more time with children than fathers did, and higher levels of education also predicted more time with children.9

Nonetheless, other people’s parenting skills remain an easy target. Blaming other parents enables the rest of us to avoid looking at the broader economic system, which has created a culture of consumption. For example, in a Chicago Sun-Times op-ed article, Betsy Hart facetiously wrote that she and her husband would like to be reincarnated as their own children “because of all the neat stuff they have.” But she insisted that it is other parents who cross the line: “our kids’ stuff pales when compared with the indulgences enjoyed by many children and teens,” she explained. Others would agree with her: in December 2007, Reuters reported on an online poll where 94 percent of parents think that kids today are spoiled, but only 55 percent think their own kids are.10

A San Diego Union-Tribune headline warned parents, “Don’t Give Your Kids Too Much,” and claimed that overindulgence is a problem in “any income bracket.”


By this logic, all children have too much—it’s not just a handful of colorful examples, but allegedly a problem impacting even poor children. In a Time/CNN poll, 80 percent of parents agreed with the statement that kids are more spoiled than ten to fifteen years earlier. For many adults, materialism appears to be a widespread problem among today’s youth. The book Affluenza supports this belief, warning parents that affluenza has become a national “epidemic.” An antiadvertising group blames childhood affluenza on advertisers, proposing that “no advertising [should be] directed at kids that promotes an ethic of selfishness.” Betsy Hart concurred in her Chicago Sun-Times piece: “We have a serious problem of a generation of kids who don’t know what it means to be told ‘no.’”11

In reality, the number of children in middle-income families has declined, while the proportion of children whose families earn at least four times the poverty level has increased. According to the US Census Bureau, in 1980, 17 percent of children lived in high-income households, 41 percent were part of medium-income families, and 42 percent were in low-income households. By contrast, in 2010, 27 percent of children were in high-income families, 29 percent in medium-income families, and 44 percent in low-income families.12

In what is often called the "middle-class squeeze," more children are growing up affluent than a generation before, but the number of children in low-income families has also increased. This growing divide makes those at the top highly visible, while those at the bottom remain relatively invisible. By continuing to believe that greed is a characteristic of today's youth, we can overlook that more than a half-million children suffer from neglect each year and that one in five American children lives in poverty.13 Is affluenza really their biggest problem? Probably not, but believing that children are categorically overindulged diverts our attention from funding programs to assist those in need. After all, if kids have so much, why give any more money, say, for public education?

Figure 10.1: Percent of American Children by Family Income Levels, 1980, 1990, 2000, 2010. Source: US Census.

Going beyond the assumptions about children, several scholars have studied young people's actual relationship with consumption. In her ethnographic research at several toy stores, sociologist Christine L. Williams found that adults use toy shopping as a status marker, particularly when purchasing gifts for other people's kids. She observed that a central facet of toy shopping for children was learning the ins and outs of shopping: assigning value to a product, choosing within a limited budget, and interacting with a cashier to make sure they receive the correct change.14

Sociologist Viviana Zelizer’s research challenges the sentimentalized notion of children as helpless consumers in her analysis of children’s economic roles. Her ethnographic research with children reveals that they are not just shoppers but engage in significant economic negotiations with both adults and their peers in their everyday lives. Zelizer also details how children are producers; many children work to earn an allowance by doing chores, but some also work alongside their parents in family businesses. They are also distributors, trading goods with siblings and peers and, as Williams also found in her toy store research, assisting with gift purchases. Sociologists Emir Estrada and Pierrette Hondagneu-Sotelo studied children who work with their street-vendor parents in Los Angeles and how kids often manage the stigma involved in the low-status labor market.15

In interviews with elementary school children and their parents, cultural anthropologist Cindy Dell Clark found that money left by the “tooth fairy” serves as an important rite of passage.16 A child may begin to earn an allowance at this age, and thus they learn to become consumers in their own right. Clark notes that this time is often difficult for parents, who must come to terms with the fact that their child’s “babyhood” has ended. So while a child’s step toward independence is important, parents may indeed feel a powerful sense of loss that accompanies the child’s entrée into the world of consumer culture. Critics may blame advertisers for luring children out into the world, but within a consumerist society, this is an inevitable and necessary step toward learning to make choices in the economic world. Anxiety about young children’s consumption coincides with their first steps toward independence.

Participating in consumer culture doesn't necessarily mean that children (or adults) are inordinately materialistic. In a study of preschool children, researcher Ellen Seiter found that children use consumption to create both group and individual identity (much like their parents).17 The children wore T-shirts with recognizable logos and carried lunch boxes with Disney characters to create a shared culture and let their peers know that they were "in on" kid culture. Children use consumption to begin to assert their independence from their parents, as being a consumer in American society is a step toward maturity.

Consumption is a social act: buying may be an individual activity, but the types of purchases we make can create a sense of shared identity. Children’s play with particular toys or knowing about the latest fad is a way of creating a shared culture. Adults use consumption in the same way, of course, buying cars, gadgets, and clothes that indicate we are members of various groups.

Maybe we should question why consumption is so much a part of fitting in with other kids, but we rarely ask the same question about our own behavior, like why so many adults desire a fifty-thousand-dollar car when the twenty-thousand-dollar one works just as well or better—or why, as of 2012, the average American household carried more than seven thousand dollars in credit card debt.18 It's too simple to say we are all just fodder for advertising genius. We consume what we do for a number of reasons: we need things, we are making statements about who we are as individuals, and we are affiliating ourselves with certain groups and making status distinctions. Children are no different in this regard.

The key difference between children’s consumption and our own is that, as adults, we tend to be outsiders in the world of children’s culture. Their consumption is an easy target for us because their culture may not hold meaning for us and may even seem silly. Of course, an identity based only on consumption is rather empty, and concerns about advertising often stem from the fear that consumption is making this generation more superficial than children of the past. Learning to be a responsible consumer means we all need to realize that having things will not fill all of our needs. This may be a hard lesson to teach, in part because advertisers insist their products will cure what ails us, but more important because a lot of us have yet to master this lesson ourselves. We adults haven’t done a great job resisting advertisers’ claims that a new car is the answer or that we can lose ten pounds this weekend by swallowing some magic powder.

These are all serious issues to address in a consumer-oriented society, where government leaders tell us that if we stop consuming people will lose their jobs, where interest rates are lowered to discourage saving, and where we receive stimulus checks to encourage us to go shopping. We show people we love them with material goods, reward children with gifts, and teach them that holidays mean shopping, even if we must go into debt in the process. Consumption is the building block of a capitalist society and has become the hallmark of American culture.

Blaming Advertisers

Beyond blaming kids themselves, advertising is an easy target to blame for our culture of consumption. According to a 2006 poll, 92 percent of parents agreed that there is too much advertising to children.19 Advertising might seem like a good explanation for young people's consumer desires. After all, advertisers speak directly to kids, sometimes working against parents' attempts to curb their material desires.

There is no shortage of people who believe advertisers have an unfair advantage over children. At one point, psychologists even considered sanctioning colleagues who consult with advertisers.20 An advocacy group called Stop Commercial Exploitation of Children called for the federal government to create new regulations like those in Norway and Sweden, which ban advertisements targeted at children under twelve.21 The group describes advertising as "a $12.8-billion-a-year industry that targets society's most vulnerable minds and deliberately excludes parents." Articles in the Nation and the American Prospect describe children as "exploited" by marketers and in need of government protection because they are vulnerable to "being programmed" and are "too young to understand … that advertising may be harmful."22

Critics frequently describe advertising as “emotionally harmful” to children, created by “corporate exploiters of children.” During her first campaign for the Senate, Hillary Rodham Clinton called for limits on “advertising that is harmful to children,” which raises the question: when is advertising harmful? Are we living in “a toxic cultural environment” created by advertising, as author Jean Kilbourne contends? And how exactly should we define “harm”? For some, the fact that teenagers can easily identify brands of beer from advertisements is cause for alarm.23

There’s a problem with this line of thinking, and it stems from the sentimentalized caricature of children and childhood. These advertising fears feed on this stunted, oversimplified view of children’s knowledge and abilities. It’s also a way to demonize advertisers as the sole source of our society’s materialism without taking responsibility ourselves. Of course, young people—and adults—can be influenced by advertising campaigns and enjoy partaking in consumer culture. But before we assume children are always naive consumers, it would be wise to find out what children already know, what capabilities and limitations they possess.

So how effective are advertisers’ techniques on children? Adults tend to view themselves as too seasoned to be vulnerable to advertising, a pattern communication scholars call the “third-person effect”: we rarely think that we ourselves are influenced by advertising but are certain that others are, especially people we consider less competent than we are.

A great deal of research suggests that children are more capable than many of us may realize. It shouldn’t be a big surprise that people raised in a media-saturated society would have the ability to think beyond simply see-want-buy. Research indicates that children under six may be critical of ads, and by the age of eight nearly all children are skeptical of advertisers’ claims.24 Preschool children may be less critical, but they are also far less likely to recall advertisements later.25 One psychologist concludes that “children younger than age eight do not understand that the intent of commercials is to persuade them to buy.” This makes it seem as though young children are uniformly incapable of any critical thinking. But the actual study on which this claim is based found that although children under eight are less likely than their older counterparts to grasp advertisers’ intent, half of the first graders in the study did know that ads are about persuasion.26

Marketing scholar Deborah Roedder John’s review of twenty-five years of advertising research suggests that preadolescents’ (ten- to twelve-year-olds’) knowledge of advertising tactics and level of skepticism are similar to those of young adults.27 This finding reflects psychologist Jean Piaget’s theory of cognitive development, which holds that critical thinking skills emerge around the age of eleven, after which kids are capable of analytical reasoning. Yes, both children and adults can become more critical consumers, but we have to keep in mind that adult competencies are not dramatically better than adolescents’ or even many preadolescents’.

Yet media fears continue to insist that we view children’s minds as blank slates that advertisers can easily manipulate. This reinforces adult power, since critics claim to defend the allegedly weak from harm. Protection can be used to restrict, to censor, and to attempt to deny children’s desires. But according to one study, teens who watch more television tend to be more skeptical of advertising and have greater marketplace knowledge.28 It is too simple to view kids as helpless victims instead of as decision makers with varying levels of critical ability—like adults.

Kids also know how to influence the adults around them, which drives a lot of parents’ anger at advertisers. Apparently, many parents find their children’s powers of persuasion irresistible, or at least annoying. As sociologist Juliet B. Schor describes, parents are more likely to buy items for their children when they believe that not doing so would impair their chances for success or popularity.29 Perhaps parents are the ones who are often easily persuaded by their kids or have trouble saying no.

Although it might be easier for parents if advertising for kids disappeared, helping children negotiate desires and delay gratification is an important part of parenting. As long as children and teens continue to spend billions of their own money (a 2012 estimate suggests that spending for and by teens amounts to nearly $209 billion), they will be a sought-after market.30 Rather than simply demand that advertising stop, we would be better off helping children become more critical of advertising and of consumption more generally. The reality of advertising is that selling to kids is not nearly as simple as many of its detractors would have us believe. Advertisers know this and work to understand children on their terms. Whereas many adults jump to conclusions about children’s supposed lack of critical thinking ability, advertisers’ use of research gives them a better understanding of the central issues important to young people. In fact, advertisers describe marketing to children as a bigger challenge than selling to adults.

Advertising and Consumption

Advertising is a multibillion-dollar industry, and it is easy to presume that companies wouldn’t spend so much money on something that doesn’t work. That’s partly correct; if advertising had no impact whatsoever, it probably wouldn’t exist. But it also isn’t a slam dunk. Just as politicians pay political consultants millions of dollars to get elected but sometimes lose badly, advertising—to kids especially—is not as easy as it might seem.

About a century ago, advertising became big business for several reasons unrelated to its effectiveness. First, as brands went national, mass advertising made more sense. With distribution channels making delivery of products easier, national brands helped consumers develop a sense of trust in the quality of the products they purchased at a time when food quality was sometimes questionable. And perhaps most important, an excess-profits tax levied on corporations after World War I meant that when businesses increased their advertising budgets, they could avoid paying more taxes. Even if advertising was only mildly successful in promoting sales, it was a valued tax write-off.31

In order to make the best use of their resources today, advertisers make a significant effort to study their target markets and learn about their values, beliefs, and lifestyles. Of course, marketers are mercenaries; they are trying to learn about people not because they care, but because they want to know how best to sell to them. After conducting surveys and focus groups, advertisers’ main interest lies in co-opting the target market’s culture and transforming it into a commodity.

As an episode of PBS’s Frontline titled “The Merchants of Cool” detailed, marketers struggle to pin down what is currently cool, something constantly shifting.32 To discover the mystery of cool, researchers rely on “cool consultants,” or a panel of fashion-forward young people who report on trends within their peer groups for a fee. Very young (or young-looking) marketing staffers sometimes go out in the field themselves to mingle with teens to spy on them and co-opt any new trends. Of course, the preeminent goal is selling a product, but marketing research is one of the few instances when adults treat kids as the experts of their own culture and offer a chance for them to be heard—and paid.

Companies also target fashion-forward young people and bloggers who they think will influence their peers, giving them free stuff in hopes that the popular style


leaders will lend their cool to new products. Marketers encourage the kids to have parties where they show their friends new products and give them samples. Although critics rightly question marketers’ involvement in kids’ lives and the ethics of acting as if they are their friends, it is also important to consider young people’s perspective in this arrangement. Not only do they get things for free (who doesn’t love free stuff?), but this is one of the few opportunities for young people’s ideas to be valued by adults. Perhaps if young people had other meaningful ways to earn as much money so quickly, marketers would not be so appealing to kids.33

Based on this research, advertisers create ads that they think will reflect central concerns that resonate with their audience, be they children, teens, or adults. Marketing executives make a priority of finding out what kids in their target demographic are most concerned with. Sociologist Michael Schudson explains that “advertisements pick up and represent values already in the culture … [and] pick up some of the things that people hold dear and re-present them … assuring them that the sponsor is the patron of common ideals.”34 Advertisements for children thus appear to be sympathetic to kids and at times critical of adults. They mirror whatever the target market wants to hear.

But simply understanding a target group’s central concerns won’t guarantee sales.35 In fact, advertisers consider an ad campaign successful based not simply on sales but on whether ads increase brand awareness and market share. Advertising has been relatively unsuccessful in changing the size of a market and is instead most effective in obtaining a larger market share of those already consuming a product. If we consumers have an image to associate with a product, it may make us more likely to choose one particular brand over another, yet research demonstrates that brand awareness does not necessarily lead to acceptance of a product or a purchase.36

This does not mean that advertising is unimportant or inconsequential. On the contrary, the content of ads reveals a great deal about central issues of concern within American society. As cultural critic Jean Kilbourne demonstrates in her Killing Us Softly videos, advertising often reflects and reinforces rigid notions of gender and power. But advertising doesn’t work the way many of us think it does. Commercials don’t necessarily make anyone—child or otherwise—immediately think “I have to have that.” Instead, advertising often works to remind us of a brand name and to link a particular image with its product. That’s why most of us would feel more comfortable brushing our teeth with Crest toothpaste than with a generic tube. We think we know something about Crest, based on experience and from advertising. Just liking an ad doesn’t mean someone will want the product. Consumer behavior is more complex than simple cause and effect; persuasion is multifaceted, and advertising is merely one part of the process.

Unlike most adults, advertisers do not consider their young targets particularly gullible. “If there were a magic formula, we’d all be rich,” an ad executive reports. Instead, trade publications often speak of children as especially skeptical and difficult to address. Marketing writer Patrick Barrett notes that children are not necessarily “a gullible soft target, but in fact are hard to hit and quick to switch off … ad messages.” Marketing executive Andrew Marsden finds kids skeptical and knowledgeable about the communications world and notes that children are often more independent than their parents are willing to admit. “There is an element of naïveté from parents. The world they grew up in no longer exists,” he remarks in Campaign, a marketing trade magazine.37 He also finds teenagers to be particularly good at manipulating parents.

In spite of the popular belief that advertising must be highly effective because so much money is spent on it, advertisers are not overly confident about their ability to reach young target markets. In fact, because children are seen as such a challenge, some companies, such as Burger King, have hired specialized agencies to handle their children’s campaigns. It seems it may be easier to influence kids’ parents.

We seldom listen to what young people themselves say about their relationship with advertising and consumption. Writing in the Fresno Bee, teen Haley Minick challenged, “I do not believe my generation is an army of brainless zombies who buy whatever they come across.” An article in the Charlotte Observer allowed North Carolina teens to share their perspectives on materialism. Several comments reflected a disdain for materialism not just among their peers, but in the rest of society as well.38 Blaming only kids and advertising for our culture of consumption masks both the complex way in which consumption is embedded within our economy and the serious problems hyperconsumption can create—and reflect.

Consumption and Social Problems

Concerns about children’s consumption can sometimes mask more serious social issues. Take public education, for example. Many school districts have allowed advertising in their hallways, on lockers, and on buses, and they also sell branded snacks in cafeterias and vending machines in order to raise sagging revenues. A Texas school even placed ads on its roof, which is visible from the flight path to Houston’s airport.39

Advertising entered schools as never before during the 1990s. In 1993 a Colorado Springs school district became the first in the country to court advertisers; at that point, the district had been unable to pass a school levy for nearly twenty years.40 This occurred four years after Channel One, a newslike program containing advertisements that students in host schools had no choice but to watch, was introduced in classrooms across the country. In exchange a school could receive up to fifty thousand dollars in audiovisual equipment.41 Other examples of corporate America’s entrance into public schools abound:

• Coca-Cola and Pepsi each provide six-figure signing bonuses and cash advances to schools signing exclusive contracts

• a company called ZapMe! once offered thousands of dollars’ worth of computer equipment and high-speed Internet access in exchange for constant ad streams and tracking students’ browsing habits (the company has since changed its name and is no longer involved in educational marketing)

• corporations like Exxon, Kellogg’s, and Domino’s Pizza mail free “educational” materials like videos, posters, booklets, book covers, and software directly to teachers

Advertisers have stepped in to fill the void left behind by a society that has steadily divested from public education, particularly following the tough budget crises many states have faced in recent years. California, for example, changed property-tax laws in 1978, which led to a sharp drop-off in the state’s overall rank of expenditures per student.42 A San Diego–area calculus teacher dealt with cuts in his photocopying budget by selling ad space on his exams, rather than cut back on the number of practice tests for his students.43 In some communities, the local tax base has been slashed dramatically by the economic downturn and tax breaks to lure corporations to relocate there. So as distasteful as corporate-sponsored schools, beverage contracts, and in-class market research may be, many communities have created a situation where schools are faced with few other options.

Money, lesson plans, computers, books, and audiovisual equipment are all things that schools need and that we all too often fail to provide. In response to this trend, many schools currently ban sales of soda and junk food, and in August 2007 a bill proposed in Massachusetts would bar any advertising and materials with logos in the state’s schools (as of this writing it has not passed).44 On a smaller scale, groups such as the Center for Commercial-Free Public Schools have convinced some school board members to reconsider accepting corporate funding. But until school districts receive adequate public funding, school boards will feel pressure to take corporate money. The sad fact is that advertisers often value children as consumers more than our society values them as students, and advertisers are fronting the money to prove it.

Yet young people can use the advertising in their schools to begin thinking critically about advertising. Some teachers report using the ads from Channel One or the corporate-sponsored curriculum materials to teach about propaganda and bias. Ironically, the omnipresence of ads may itself drain advertising of its influence. “They just fade into the background,” a high school student remarked about the ads in his school, which were so widespread that he barely noticed them anymore.45 This is what advertisers call “clutter”: ads become white noise that we grow so accustomed to that we eventually cease to see them. Schools are beginning to look like the rest of American society, where public space is branded space.46

Ironically, while many people complain about corporations’ influence in school, public education policy has increasingly mirrored a business model, trying to foster competition between schools via standardized tests and using funding as a reward rather than a right. This model of education makes students themselves a product. It seems that many adults have a hard time understanding success outside of the logic of consumption.

Advertising in schools makes visible our failure to provide enough resources, yet our culture of consumption also has serious hidden costs to the environment. Buying new stuff means that the old stuff has to go somewhere, and old electronics are particularly toxic in landfills. The Environmental Protection Agency (EPA) estimates that in 2009, Americans had 2.3 million tons of electronic items that were no longer in use, but just 25 percent of that electronic waste was recycled.47 Think about all of the devices that seem to become obsolete after just a year or two, such as cell phones, which service plans encourage us to turn over quickly. Every new phone requires assembly, and manufacturing its components often creates toxic gases that affect those working on assembly lines and pollute the surrounding area.48 The EPA estimates that in 2009, 129 million cell phones were disposed of, and only 8 percent were recycled. Some old electronics get dumped in developing countries, where workers pull apart computers and televisions with their bare hands, suffering not just injury but exposure to dangerous gases.49

Besides getting new phones regularly, we buy a lot of new clothes, so Americans also end up throwing away a lot of clothes and shoes. In 2010 the EPA estimated that more than 13 million tons of textiles ended up in landfills, representing more than 5 percent of all waste.50 This is something we rarely think about when visiting the mall: our old things have to go somewhere. And because dwelling on the environmental impact of waste can put a damper on sales, it is something we rarely hear about, and rarely consider, when buying something new.

Critical Consumerism and Social Movements

Consumerism is deeply intertwined with the American economy and linked with economic growth. Instead of simply trying to eliminate children’s relationship with consumer culture and postpone the inevitable, I advocate critical consumerism, which acknowledges that we are part of a consumption-based society. This means admitting that the experience of consumption can be fun and enjoyable, but also that it can be empty and expensive, can negatively impact the environment, and ultimately cannot fill every emotional need, as it often promises to do. Advertising is just a piece of a big puzzle, and consumerism is built into the fabric of American economic and cultural life. Parents and teachers can prepare children to be members of a consumer-driven culture or, if truly concerned about our consumption-based society, attempt to change the nature of that culture itself, starting with their own consumer habits. Rather than only complaining about the pervasive culture of consumption and attempting to shield children from it, which eventually fails, we can work to create more critical consumers, starting with ourselves.

Both adults and children would benefit from challenging the belief that consumption and happiness go hand in hand. This first requires that adults think critically about why they buy what they do and that they incorporate a dialogue with children into consumer purchases. As researchers Lan Nguyen Chaplin and Deborah Roedder John reported in a 2007 study, when children’s self-esteem is lower, they tend to seek material items for a boost. In their study, even small positive comments were effective.51 I would predict that if they replicated this study with adults, the findings would likely be similar.

Consumption is also a social act. Although many people would be reluctant to admit that they purchased a car or a home to impress family and friends, consumption can be part of an attempt to create an image of success. The notion that we can have things we don’t currently have the money to buy permeates both our culture and our economy. It is the exception, rather than the rule, to buy a home or a car with cash. For those who carry credit card debt, the same is true of clothes, shoes, and other goods and services that millions of households pay for in installments.

This is not simply because we are all shallow and materialistic. Controlling for inflation, most Americans’ income has actually declined slightly during the past decade, while cost of living has risen—most notably for higher education. Many young people begin their adult lives in debt. According to the College Board, two- thirds of the class of 2008 graduated with student-loan debt.52

Rather than simply focus on children’s consumption, several social movements have addressed our broader culture of consumption, as well as the structure of the economy. The downshifting, or slow, movement encourages people to question the often high-stress lifestyle of working many hours in order to pay for hyperconsumption. Instead, the movement encourages people to seek more balance and become mindful of the work and consumption choices they make in order to enjoy life more and reduce work- and debt-related stress.

The Center for the New American Dream shares this perspective, offering tips for simplifying one’s life, living with less, and creating more balance by becoming more mindful of our consumption habits. By reducing consumption, these and other organizations also seek to reduce its environmental impact. The success of the film An Inconvenient Truth has also created a broader dialogue about the importance of protecting the environment and is an example of how media can help raise awareness of the environmental costs of materialism. Because many forms of popular culture are ad supported, this message may not be a terribly popular one, particularly if reducing consumption reduces the bottom line of media companies.

This is one area where young people are often on the forefront. As Business Week reported, many college campuses are sites of environmental activism and passion. Using social networking to organize, many young people have been active in creating change both locally and globally. A Voice of America story, “New Generation Revolutionizes Environmental Activism,” describes how “a new generation of eco-warriors is revolutionizing environmental activism.”53

Although participants across the age span joined the Occupy movement in 2011, young adults were at its forefront, questioning the economic structure as a whole and venting anger most pointedly at the financial industry. For many young people who had taken on student-loan debt but had been unable to find good-paying jobs during the recession, both consumerism and the promise of upward mobility seemed particularly elusive. The Occupy movement served as a lightning rod for a number of reasons. For one, seldom does the public call the broader economic and social structure to task in debates about social problems. We are far more familiar with blaming individuals for their own economic plight: for being too superficial (on programs about money management), for being unable to delay gratification, and for making poor financial decisions. This pattern, often referred to as the culture-of-poverty explanation for economic inequality, serves to discount broader structural conditions beyond an individual’s control—which I discuss in the next chapter.

Rather than recognize the structural conditions that help us make choices that are not always good for us financially, it is easier to blame individuals’ personal values. Although blaming advertisers for children’s materialism seems to shift the responsibility a bit, advertising alone does not create our culture—and social structure—that encourages us all to want more than we have materially.

Notes

1. Mike Strobel, “College Student Hires Himself Out as a Living Billboard to Raise Money for His Tuition, Discount Jeans, and Pasta Dinners,” Toronto Sun, March 4, 2006, 6; Kate Zernike, “And Now a Word from Their Cool College Sponsor,” New York Times, July 19, 2001.

2. For examples, see the Commercialism in Education Research Unit’s website.

3. Martha Irvine, “Youthful Dreams of Wealth,” Associated Press, January 23, 2007; Harris Interactive Press Release, “Teaching Appreciation Diminishes the Impact of Materialism,” January 8, 2007; Julie Deardorff, “Boost Children’s Self-Esteem, Curb ‘Gimme’ Attitude,” Chicago Tribune, December 27, 2007.

4. Ylan Q. Mui, “At School, Labels a Runway Hit,” Washington Post, November 29, 2004, A1.

5. Nancy Gibbs et al., “Who’s In Charge Here?,” Time, August 6, 2001, 40.

6. Marci McDonald and Marianne Lavelle, “Call It ‘Kid-fluence,’” US News & World Report, July 30, 2001, 32.

7. Susan Linn, Consuming Kids: The Hostile Takeover of Childhood; Juliet B. Schor, Born to Buy: The Commercialized Child and the New Consumer Culture; Alissa Quart, Branded: The Buying and Selling of Teenagers.

8. Daniel Bell, The Coming of Post-Industrial Society: A Venture in Social Forecasting (New York: Basic Books, 1976).

9. Garey Ramey and Valerie A. Ramey, “The Rug Rat Race,” Brookings Papers on Economic Activity, 2010. See also Kim Campbell, “Deprived of Parent Time? Not Most Kids,” Christian Science Monitor, April 5, 2000, 1.

10. Betsy Hart, “Kids Need Parents Who Know How to Say No,” Chicago Sun-Times, August 5, 2001, 28; Reuters, “94% of Parents Polled Say Today’s Kids Are Spoiled, but 55% Say Their Own Kids Are Part of Problem,” December 12, 2007.

11. Ann Perry, “Don’t Give Your Kids Too Much, Experts Say,” San Diego Union-Tribune, January 20, 2002, H1; Gibbs et al., “Who’s In Charge Here?,” 40; John De Graaf, David Wann, and Thomas H. Naylor, Affluenza: The All-Consuming Epidemic (San Francisco: Berrett-Koehler, 2001); Don Oldenburg, “Ads Aimed at Kids,” Washington Post, May 3, 2001, C4; Hart, “Kids Need Parents.”

12. Low income is defined as families earning less than two times the federal poverty level, middle income as two to four times the poverty threshold, and high income as at least four times the poverty level. Based on US Census Bureau, Current Population Survey, 1981 to 2010 Annual Social and Economic Supplements.

13. Ibid.; US Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children’s Bureau, Child Maltreatment, 2010, 2011.

14. Christine L. Williams, Inside Toyland: Working, Shopping, and Social Inequality (Berkeley and Los Angeles: University of California Press, 2006).

15. Viviana A. Zelizer, “Kids and Commerce”; Emir E. Estrada and Pierrette Hondagneu-Sotelo, “Intersectional Dignities: Latino Immigrant Street Vendor Youth in Los Angeles,” Journal of Contemporary Ethnography 40, no. 1 (2011): 102–131.

16. Cindy Dell Clark, Flights of Fancy, Leaps of Faith: Children’s Myths in Contemporary America.

17. Ellen Seiter, Television and New Media Audiences.

18. Amy Traub and Catherine Ruetschlin, “The Plastic Safety Net,” Demos, May 22, 2012.

19. Marty McGough, “Parents Eyeing Youth Marketing,” PR Week, February 27, 2006, 8.

20. Marilyn Elias, “Selling to Kids Blurs Ethical Picture,” USA Today, March 20, 2000, D7.

21. Stephanie Schorow, “Sales Pitches Strike Out: Advocacy Group Protests Marketing to Children,” Boston Herald, September 10, 2011, 31; Ira Teinowitz, “FTC Opinion Stirs Advertiser Fears; Hands-Off Stance on Violence in Marketing May Invite Legislation,” Advertising Age, November 27, 2000, 4.

22. Schorow, “Sales Pitches Strike Out”; Steven Manning, “Branding Kids for Life,” Nation, November 20, 2000, 7; Susan Linn, “Sellouts,” American Prospect, October 23, 2000, 17.

23. Manning, “Branding Kids for Life”; Lisa Prue, “Author: Advertisers Harmful to Children,” Omaha World-Herald, April 20, 2001, 39; Ronald Brownstein, “As Youths Are Bombarded with Ads, a Pro-Family Group Counterattacks,” Los Angeles Times, April 30, 2001, A5.

24. Michael Schudson, Advertising, the Uneasy Persuasion: Its Dubious Impact on American Society, 233.

25. Donna R. Powlowski, Diane M. Badzinski, and Nancy Mitchell, “Effects of Metaphors on Children’s Comprehension of Print Advertisements,” Journal of Advertising 27 (1998): 83–97.

26. Sandra L. Calvert, “Children as Consumers: Advertising and Marketing,” Future of Children 18 (2008): 216–219.

27. Deborah Roedder John, “Consumer Socialization of Children: A Retrospective Look at Twenty-Five Years of Research.” See also David M. Boush, Marian Friestad, and Gregory M. Rose, “Adolescent Skepticism Toward TV Advertising and Knowledge of Advertiser Tactics.”

28. Tamara F. Mangleburg and Terry Bristol, “Socialization and Adolescents’ Skepticism Toward Advertising.”

29. Juliet B. Schor, The Overspent American: Why We Want What We Don’t Need (New York: HarperPerennial, 1998).

30. “Teenage Consumer Spending Statistics,” Statistic Brain, February 8, 2012.

31. For discussion of the excess-profits tax, see Martha Olney, Buy Now, Pay Later: Advertising, Credit, and Consumer Durables in the 1920s, 4.

32. PBS, “The Merchants of Cool,” Frontline, February 27, 2001.

33. Quart, Branded; Linn, Consuming Kids.

34. Schudson, Advertising, the Uneasy Persuasion, 233.

35. Reinhold Bergler, director of the Institute of Psychology at the University of Bonn, critiques what he calls “naïve everyday psychology” as employed to explain advertising’s alleged effects on children. “There are no mono-causal links between advertising and the effect it has on behavior,” he stated in response to the belief that young people are easily manipulated by informed advertisers. Bergler, “The Effects of Commercial Advertising,” International Journal of Advertising 18 (1999): 412.

36. Lucy Henke, “Young Children’s Perceptions of Cigarette Brand Advertising Symbols: Awareness, Affect, and Target Market Identification,” Journal of Advertising 24 (1995): 13–28.

37. Andy Fry, “Just Who Are You Kidding? Techniques for Marketing to Children,” Marketing, October 9, 1997, 26; Patrick Barrett, “Are Ads a Danger to Kids?,” Marketing, September 4, 1997, 15; Jade Garrett, “Are Children an Advertiser’s Perfect Audience?,” Campaign, August 25, 2000.

38. Haley Minick, “Expensive Habits,” Fresno Bee, June 8, 2003, H8; Young Voices, “Materialistic Youth?,” Charlotte Observer, January 30, 2007.

39. Morgan Smith, “Seeking Money, Texas Schools Turn to Advertisements,” New York Times, February 16, 2012.

40. Steven Manning, “Students for Sale,” Nation, September 27, 1999, 11.

41. See Roy Fox, Harvesting Young Minds: How TV Commercials Control Kids (Westport, CT: Praeger, 2000).

42. Mike A. Males, Framing Youth: Ten Myths About the Next Generation, chap. 9.

43. Greg Toppo and Janet Kornblum, “Ads on Tests Add Up for Teacher,” USA Today, December 2, 2008, A1.

44. “Massachusetts Could Ban Advertising in Schools,” radio broadcast by KPFK Los Angeles, August 3, 2007.

45. Manning, “Branding Kids for Life.”

46. See Naomi Klein, No Logo: Taking Aim at the Brand Bullies (New York: Picador USA, 1999).

47. US Environmental Protection Agency, “Statistics on the Management of Used and End-of-Life Electronics” (Washington, DC: Government Printing Office, 2009).

48. Sierra Club, “The World Trade Organization: Trading Away Environmental Health and Safety.”

49. Jennifer Alsever, “The ‘Green’ Way to Dump Electronic Junk,” April 22, 2008.

50. US Environmental Protection Agency, “Textiles” (Washington, DC: Government Printing Office, 2012).

51. Lan Nguyen Chaplin and Deborah Roedder John, “Growing Up in a Material World: Age Differences in Materialism in Children and Adolescents,” Journal of Consumer Research 34 (2007): 480–493.

52. US Census Bureau, Current Population Survey 2009, “Table 690: Money Income of Households—Percent Distribution by Income Level, Race, and Hispanic Origin in Constant (2009) Dollars” (Washington, DC: Government Printing Office, 2011); College Board, “Debt by Degree,” in Trends in Student Aid, 2011 (New York: College Board, 2011).

53. Faiza Elmasry, “New Generation Revolutionizes Environmental Activism,” Voice of America, June 19, 2011.






Beyond Popular Culture: Why Inequality Is the Problem

Popular culture may seem like the central cause of a host of problems today based on news reports, commentary, and the publicity surrounding often questionable studies. Together with the efforts of concerned citizens and politicians, this coverage helps create the now taken-for-granted belief that media content is a major problem. Claims makers actively work to raise awareness of what they see as the pop culture problem, which occasionally rises to the level of a moral panic. The content of popular culture is important to examine, but it just isn’t causing the problems that concern people most.

We have plenty of examples of media representations that portray less than ideal behaviors. Although such representations are a tempting explanation for social problems, especially for those who believe that young viewers will imitate what they see, popular culture is not the central cause of changes in childhood, bullying, suicide, educational failure, violence, sexual behavior, teen pregnancy, single parenthood, eating problems, substance abuse, or materialism. It is not the main reason inequality, racism, sexism, and homophobia persist. We certainly see all of these issues reflected in popular culture, and media representations can reinforce some of them.

But if we really want to improve public education and reduce violence, teen pregnancy, single parenting, and other issues of concern, we need to understand what the main causes are. The media, in their many varied forms, seem like a reasonable explanation because they are by nature highly visible, clamoring for our attention. And ironically, to get our attention the news media often invoke fear of popular culture, further legitimizing the concerns of many people that the media are the main problem.

As corporate entities, media conglomerates have a lot invested in the status quo. If culture is the problem, even culture their company may take part in creating, then we stay focused on media content rather than public policies and the broader social structure. For instance, the 1996 Telecommunications Act (which is rarely scrutinized as harshly as popular culture in news reports today) enabled behemoth media conglomerates to become even bigger, to create even larger monopolies in the production of media culture. These corporations also benefit from a tax structure that minimizes their liability. They (and the politicians they lobby) benefit from our tendency to view poverty and lack of opportunity as the result of individual failings rather than a public problem. Currently, our federal budget is composed of taxes mainly collected from individuals; only 9 percent came from corporations in 2010, compared with 17 percent in the 1980s and 39 percent in the 1950s.1

Media conglomerates have a lot to gain by keeping us focused on the popular culture problem, lest we decide to close some of the corporate-tax loopholes to fund more social programs. More and more, the news is just one arm of an octopus designed to feed shareholders, so the more laissez-faire our public policy remains, the better for them. It’s a win-win situation for the corporations: media phobia deflects attention away from public policy solutions and onto media culture, which the First Amendment largely protects from regulation. While we are busy clamoring for more restraint and changes in content, there is little threat of any real change in social structure or challenge to business as usual. In short, the news media promote media phobia because it doesn’t threaten the bottom line. Calling for major policy changes to reduce inequality and poverty would.

The Problem of Poverty

Throughout this book I have discussed the widespread impact of child poverty—children constitute a majority of the nation’s poor, and poverty is closely linked with many of the problems that we usually blame the media for, such as violence, school failure, teen pregnancy, and obesity. While the “media made me do it” stories grab headlines, we avoid confronting the most significant problem facing American kids: the fact that more than 15 million children live in poverty.2

Ads for starving children in faraway places might gain our sympathy, but poor American children are often invisible or just seen as a threat to public safety. Traditionally, the American focus on individualism holds poor people solely responsible for their own predicament. This encourages us to ignore the plight of most of these kids.

Figure 11.1: Percent of American Children Under 18 Below Poverty Line, 2009. Source: US Census.

Children under six are the most likely to live in poor families: one in five American children under six lives in poverty, and 48 percent of children under three live in low-income families. Nearly 11 percent of all American children face food insecurity, meaning their families have significant difficulty providing enough food. And a lot of their parents work: according to the National Center for Children in Poverty at Columbia University, 47 percent of low-income families (living at double the poverty line or less) with children had at least one parent who worked full-time in 2010, and another 29 percent had at least one parent who worked full- or part-time for at least some of 2010.3

Being poor has a profound impact on children, who are less likely to have regular access to health care, less likely to see a dentist, and more likely to experience chronic problems like asthma and obesity, as well as family stressors associated with poverty.4 Their parents are less likely to be married or remain married and are more likely to experience depression.5 When these children begin school, they tend to be less prepared than their more affluent peers and attend schools that have less experienced teachers, older materials, and larger class sizes.6 Combined with these challenges is a greater likelihood for low-income children to live in high-violence communities, particularly in urban areas that remain highly segregated, not just by race or ethnicity but by economic status. Yes, they might spend more time watching television or using other forms of media, but this is an effect of social structure.

The problem of poverty also reflects the relationship between income and race and ethnicity. Most poor children in America are white (more than 10 million of the estimated 15 million people in poverty under eighteen), but a disproportionate percentage of children in poverty are African American or Latino, relative to their proportion in the US population.7

African American and Latino children in poverty are also more likely to live in neighborhoods with higher concentrations of poverty than poor whites. Scholars at the Harvard School of Public Health call this pattern “double jeopardy,” or a dual disadvantage for children who are both poor and African American or Latino.8

These segregated communities of concentrated poverty are the direct result of postwar policies such as redlining, when banks refused to lend money to white people living in neighborhoods with African Americans or Latinos, encouraging them to flee central cities. Programs like the GI Bill enabled many white families to buy homes with little or no money down and in some cases pay less to move into a newly constructed home than to remain in an urban area. Thus, whites who could receive mortgages in new suburban developments moved away, leaving people of color behind in communities that gradually decayed due to lack of investment.

As urban communities declined, businesses and jobs moved away too, as did other basic services like hospitals and supermarkets. With an eroded tax base, schools also lost a major source of revenue, and well-qualified teachers left, too.


As sociologists Robert Sampson and William Julius Wilson describe, social isolation resulted from decades of disinvestment, with white and middle-class flight leaving a high concentration of poor African Americans and Latinos behind. People in these communities confront higher rates of violence and other illegal activity, increased rates of single parenthood and teen pregnancy, and higher high school dropout rates.9 These are the central causes of many of the problems that we tend to blame on popular culture.

Pop Culture Diversions and Economic Realities

Not only does popular culture provide its consumers a temporary escape from their own lives, but it also diverts our attention from the extent to which poverty and inequality remain cancers in American society. As long as we convince ourselves that television and video games are behind disparities in academic achievement, we can stop thinking about the fact that our public schools are still very segregated racially and economically and remain vastly unequal. High school dropout rates for Latinos and blacks dwarf those of Asian Americans and whites, and nearly one in five foreign-born kids does not finish high school.10 It is easy to think of this as the failure of individuals, but it is also the failure of schools to provide students with the basic tools to succeed academically. Assigned to overcrowded classrooms in crumbling buildings (the state of many of our nation’s urban schools), young people come to believe they aren’t important and that their education does not matter.

The kids in these schools must also navigate violent communities once the school day is done. Yes, video games might seem like they play an important role in creating violence, but only if we pretend that kids are not exposed to violence anywhere besides the media. For some children, the violence starts right at home. A study of more than nine thousand respondents found that those who experienced some form of child abuse were significantly more likely to perpetrate violence (including against an intimate partner) than those who did not experience a form of abuse as a child.11 This might seem like common sense, but it is often thrown out the window when we zero in on popular culture and exclude these and other major factors.

As with violence, teen pregnancy is closely linked with poverty. Janet Rich-Edwards of the Harvard Medical School wrote, “Poverty, not maternal age, [is] the real threat to maternal and infant welfare. It is not just the disadvantaged, but the ‘discouraged among the disadvantaged’ who become teen mothers.”12 As counterintuitive as it may seem considering the high costs associated with child rearing, poor teens may feel like they have less to lose by becoming parents, particularly given often limited access to comprehensive sex education and birth control. A 2008 New Yorker article examined the high teen pregnancy rates of evangelical teens, the group with perhaps the strongest cultural norms against premarital sex. If culture were the best predictor, we might presume that these teens would have the lowest teen pregnancy rates. Yet as sociologist Lisa A. Keister found in her analysis of Bureau of Labor Statistics data, religion is highly correlated with wealth. The median net worth of members of conservative Christian denominations was approximately half that of Americans more generally.13 Besides limited sex education, this often translates into a perception of fewer economic opportunities in the future, and therefore less of a disincentive to an early pregnancy.

Early pregnancy may also lead to early marriage, a strong predictor of divorce. Although daytime talk shows might focus on a spouse’s “Internet addiction” or too much time playing video games, economic instability is the biggest threat to families today. Children in single-parent families also face unique challenges, including limited school readiness, reduced supervision, and, most centrally, an increased likelihood of living in poverty.14

Single-parent households are often the result of economic inequality, rather than the cause, as federal marriage promotion programs imply. There is also a stark connection between the percentage of single-parent families and race: 66 percent of African American, 52 percent of American Indian, and 41 percent of Latino children live with an unmarried parent. By contrast, 24 percent of white and 16 percent of Asian American children live in a single-parent household.15 Clearly, there is a strong relationship between these numbers and the percentage of children in poverty by race, and thus it is difficult to untangle the problems caused by single-parent families from those of poverty.

Low-income parents may also face difficulty providing their children with healthy diets, particularly if they live in urban areas with few grocery stores and limited access to fresh produce. Couple this with little time to prepare meals and the abundance of fast food chains offering cheap food quickly in poorer neighborhoods, and we see why obesity is higher for lower-income children.16 Race and ethnicity are important predictors here as well; once again, rather than simply a result of culture, we can also locate the issue of obesity as rooted in the intersection between race and socioeconomic status. The same children who have limited access to regular health care are also the most vulnerable to obesity and its related complications.

In addition to limited access to health care, low-income adults with drug or alcohol problems often have few options for rehabilitation; at the same time, the stress of struggling to get by can make substance use or overeating more appealing. Although these issues are certainly not limited to the poor, the complications can become more dramatic for those without access to treatment.

With increased surveillance of the drug trade in urban centers, many users are likely to find themselves in prison, America’s largest institution for drug abusers. Parental incarceration adds to family instability and poverty: parents in prison can’t help provide for their families financially. Once released, they face even greater difficulty finding work due to a prison record. According to Nell Bernstein, author of All Alone in the World: Children of the Incarcerated, parents who are incarcerated far from home are less likely to have frequent contact with their children and face a greater likelihood of long-term estrangement. With less family supervision and fewer parental connections, children are prone to repeat the cycle of crime and substance abuse.17

When low-income people fall prey to substance abuse or otherwise struggle, their failures are often made visible, and it is easy for the affluent to point to the results of inequality to justify their prejudices. The scarcity of quality educational opportunities, jobs, and social services is not always clear to those who have not experienced the same struggles. The failure to fully comprehend the challenges faced by those at the bottom of the socioeconomic ladder is not surprising; it is built into the structure of our social system. Americans often have little interpersonal interaction with people of differing economic circumstances, and geographic segregation makes many people’s day-to-day experiences a mystery.

This helps us believe that the challenges people face are largely of their own making rather than systematic inequality or discrimination. As long as we continue to focus only on individual effort and overlook social structure, we might believe that sexism and racism are just cards people “play” rather than persistent patterns, for instance. Yes, we have made gains in reducing overt discrimination, but it still often lurks beneath the surface: the job not offered because a female candidate is not seen as aggressive enough or an African American candidate is presumed to lack motivation for no clear reason.

But evidence of discrimination still exists: for example, both women and persons of color were disproportionately targeted with subprime mortgages before the 2008 collapse, even when they qualified for mortgages with lower interest rates.18 Maintaining inequality, be it based on race, gender, or sexual orientation, serves to uphold the social order as we know it, and those at the top of the hierarchies benefit directly from their existence. Unraveling the tapestry of inequality requires us to look critically at American society, something that many people are reluctant to do. It’s much easier to blame popular culture.

Why We Blame the Media

If poverty and inequality are so closely connected with the problems we hope to solve, why do we focus so much on popular culture? We don’t like to talk about poverty or racism in America; the persistent relationship between race and class challenges the American dream of equal opportunity. As much as we might hope to believe that the civil rights movement and later the election of a black president ended all traces of racial inequality, these were a beginning, certainly not the end, of the process of creating equality.

We might look to those with great wealth and prosperity and believe we can all get there—a more hopeful illusion than to focus on the many children with no health insurance attending deteriorating schools. When we do talk about poverty, we tend to blame only the poor themselves; we tell them they aren’t educated enough, are too lazy, lack ambition, and are dependent on welfare. Congress and former president Clinton decided in 1996 that welfare was the problem, not poverty or the untenable minimum wage, and passed “welfare reform” legislation, which limited the amount of aid families in poverty could receive. Children are in the middle of this politically charged debate and stand to lose the most. To face problems associated with poverty, we will have to rethink public policy choices and consider using more resources to bring more families out of poverty.

But not all of our biggest challenges are related to poverty and inequality. Even well-funded schools merit a closer look to examine how well they can meet the needs of their students, how well they inspire rather than alienate the young people they are meant to serve. Schools tend to focus on conformity—both academically and socially—and need to take responsibility to support those who don’t quite fit in. This, of course, can be accomplished only when the public provides better support for school systems and teachers.

If we truly want to reduce violence in America, the answer will not be found by limiting who can buy violent video games but rather by confronting the persistence of inequality. But this is no easy task, particularly when it comes time to foot the bill for solutions. Blaming media is a cheap campaign decision—it costs relatively little to hold hearings, compared with getting at the heart of what causes violence. Blaming media takes us all off the hook: we can point our fingers at media producers and at parents we don’t find restrictive enough.

Bottom line: fear of media sells. The press is central in the development and perpetuation of fear—even of the media itself. As sociologist David Altheide writes in Creating Fear: News and the Construction of Crisis, things that scare us make for good drama, draw us closer, and create a sense that we need to watch or read more for our safety and, most compellingly, for the safety of children. But too much fear tends to backfire, particularly if we feel we can’t gain control of the threat. As social psychological research demonstrates, scary messages work best if the fear they create is only mild.19 When people feel too scared, too out of control, they tend to go into denial and ignore the frightening information. So providing low doses of seemingly manageable fear draws viewers in, but presenting complex problems may make people want to avoid the news. This is why the things that we most need to be concerned about, the problems without easy solutions, do not make for compelling news stories. They scare us too much.

Media fear resonates with preexisting anxieties about youth culture and new media technologies. It would be too simple to say that this fear is only the press’s fault. As I have emphasized throughout this book, both youth and youth culture are symbolic of change, of a loss of adult centrality and control. Popular culture is often the bastion of the young and in many ways reflects the contemporary experience of youth, which often seems frightening to adults.

Lessons from Popular Culture

Pop culture matters; media analysis is a great tool for exposing the complexities of issues like violence, gender and sexuality, racism, and homophobia. Our media culture provides a great text for both artistic and social criticism. We can ask questions such as: What do shows like CSI, NCIS, and Law & Order teach us about perceptions of the justice system’s ability to catch perpetrators, and about how the usual victims and suspects are portrayed? We can watch The Office to discuss some of the banalities of corporate job experiences, consider what The Real Housewives franchise tells us about contradictions within gender and power, or examine how the relative absence of nonwhite professionals in a drama series reflects inequalities of race. We might look at fashion magazines and analyze why so many of the models are super-thin, rather than just focus on possible effects of their appearance. Considering images of beauty in the context of the construction of gender and sexuality is one way to use such images as a point of learning, not just condemnation.

Instead of media phobia, where we complain about what media content might do to us, we should engage in more media analysis. This means critically exploring representations of gender, sexuality, and race, for instance, and considering how these representations may reflect social inequality. What we ought to be discussing about media is who produces them, and for what purpose. In the United States especially, most media are produced for profit and are often created for some audiences and consumers but not others. Although green is the only real color of interest, there is more green to be found in some racial or ethnic groups than others, and these groups are more likely to have media produced with them in mind.

We can learn a lot about race, class, gender, sexuality, and age by studying media representations and linking them to systems of power. It is not enough simply to spot these patterns within media; we also need to examine the other social and historical factors that create such conditions.

Sexism, for instance, wasn’t born with the advent of magazines, movies, or television, but it does live and breathe there. By paying sexism a “visit” within media culture, we learn a lot more about it. Media content is a poor predictor of individual behavior but an excellent window through which we can understand social relations of power. As with any form of self-scrutiny, we tend to avoid looking at media culture in this way. Like media analysis, societal analysis doesn’t mean America bashing; rather, it means an honest look at where we are and what we want to improve. Change is possible only when we dig below the surface, below media content, to critically explore issues of power in America. Being media savvy alone does not necessarily lead to critical analysis, which should be a part of being educated in the twenty-first century. Media-literacy education, which seeks to increase critical awareness of how texts are produced and how they represent (or fail to represent) real social issues, is essential.

This is not to say that parents should ignore what their kids are watching or listening to. But parents do need to recognize that kids’ taste in popular culture is not necessarily an indication that their values are different, but rather a sign that their needs may be met through listening to music that parents may not approve of. To some degree, growing up happens with peers and away from parents. Many parents understand this, but it is sometimes difficult. Kids, especially as they become older, do need some space to enjoy popular culture on their own without having to explain themselves. That said, a supportive parent who can listen without judging will understand their kids better and have a much greater impact than popular culture ever can. Informed trust and supportive monitoring will be much more productive than attempting to heavily regulate and control teens’ media choices.

Kids aren’t the only ones who need to work toward becoming critical media consumers. Adults often believe that kids don’t know the difference between fact and fiction, but adults could stand to question news reports and political pundits in this age of political polarization. This means questioning what we are told are facts by the news media and challenging the logic of hyperconsumption, that more is better and that fulfillment and good citizenship are accomplished by spending. Focusing on these other issues doesn’t mean that popular culture doesn’t merit our criticism and scrutiny. Media criticism can be the first step in beginning social analysis; it should never be the only focus of social analysis, however.

Media Culture: A Sheep in Wolf’s Clothing

Although media may reflect and remind us of troubling social conditions, media are not the central cause of violence and the other things that truly scare us. Even though media culture may not be central in creating violence or promiscuity, it is nonetheless very powerful. Its power shapes how we schedule and fill our days and influences how we interact with others. Its form and content often shape how we talk to each other, what we talk about, and how we think about ourselves and the world around us. Its emergence has certainly altered other institutions, such as education, government, and religion.

Keeping this in mind, media merit critical analysis, as do all social institutions. The news media in particular cast a very powerful spotlight, directing our attention onto some issues and away from others. Their power is feared as harmful, but it is much more complex than that. The biggest harm media power can yield is not in creating killers, but in creating complacency.

This complacency is not due to fictional entertainment, as we so often fear; it is created from news reports based on emotion and drama rather than citizenship. We are lulled not by music or movies or video games, but by programs passing as news that only skim the surface of what we need to know about our government, our corporations, and our society. I say that media culture is a sheep in wolf’s clothing because it gets our attention and seems scary, but underneath it is much more of a follower than a real leader or creator of change. Media phobia challenges nothing and fails to address the central problems that do affect millions of Americans. The media sheep play follow the leader.

It doesn’t have to be this way. Because media culture is so enchanting, so attention seeking, it can be used to redirect our attention to the sources of our society’s problems and to provide us with a wakeup call about the persistence of inequality in the United States. Although changes in media culture may truly concern us at times, we need to be sure to keep our real challenges in sight. It would be a mistake to focus only on the negative in these changing times, overlooking the positive aspects of both media culture and the next generation. The issues I address in this book—education, violence, teen pregnancy, family instability, health, substance use, sexism, racism, and homophobia—all merit our attention. In order to address them directly, we can’t be distracted by the lure of popular culture, which is ultimately not the key problem, nor is its control the solution.

Notes

1. Executive Office of the President, Budget of the United States Government: Fiscal Year 2012 (Washington, DC: Government Printing Office, 2011),

2. US Bureau of the Census, Income, Poverty, and Health Insurance Coverage in the United States: 2010, Report P60, n. 238, Table B-2, 68–73,

3. Sophia Addy and Vanessa R. Wight, “Basic Facts About Low-Income Children, 2010” (New York: National Center for Children in Poverty, 2012),; Ayana Douglas-Hall and Michelle Chau, “Basic Facts About Low-Income Children: Birth to Age 18” (New York: National Center for Children in Poverty, 2008),; Vanessa R. Wight and Kalyani Thampi, “Basic Facts About Food Insecurity Among Children in the United States, 2008” (New York: National Center for Children in Poverty, 2010),; Addy and Wight, “Basic Facts.”

4. Kay Johnson and Suzanne Theberge, “Reducing Disparities Beginning in Early Childhood” (New York: National Center for Children in Poverty, 2008),; Federal Interagency Forum on Child and Family Statistics, “Health Care,” in America’s Children in Brief: Key National Indicators of Well-Being, 2008 (Washington, DC: Government Printing Office, 2008),

5. See marriage data in Johnson and Theberge, “Reducing Disparities.”

6. See Karen Sternheimer, Kids These Days: Facts and Fictions About Today’s Youth, 69–71.

7. US Census Bureau, Current Population Survey, “Age and Sex of All People, Family Members, and Unrelated Individuals Iterated by Income-to-Poverty Ratio and Race, 2010,” in Annual Social and Economic (ASEC) Supplement (Washington, DC: Government Printing Office, 2011),

8. Dolores Acevedo-Garcia et al., “Toward a Policy-Relevant Analysis of Geographic and Racial/Ethnic Disparities in Child Health,” Health Affairs 27, no. 2 (2008): 321–333.

9. Robert J. Sampson and William Julius Wilson, “Toward a Theory of Race, Crime, and Urban Inequality,” in Crime and Inequality, edited by John Hagan and Ruth Peterson (Stanford, CA: Stanford University Press, 1995); US Census Bureau, “High School Dropout Rates,” October Current Population Survey, various years (Washington, DC: Government Printing Office, 2007),

10. Child Trends, “High School Dropout Rates,” Child Trends Data Bank, 2012,

11. Xiangming Fang and Phaedra S. Corso, “Child Maltreatment, Youth Violence, and Intimate Partner Violence-Developmental Relationships,” American Journal of Preventive Medicine 33, no. 4 (2007): 281–290.

12. Janet Rich-Edwards, “Teen Pregnancy Is Not a Public Health Crisis in the United States. It Is Time We Made It One,” International Journal of Epidemiology 31 (2002): 555–556,

13. Margaret Talbot, “Red Sex, Blue Sex: Why Do So Many Evangelical Teen-Agers Become Pregnant?,” New Yorker, November 3, 2008; Lisa A. Keister, “Religion and Wealth: The Role of Religious Affiliation and Participation in Early Adult Asset Accumulation,” Social Forces 82, no. 1 (2003): 175–207.

14. Tamara Halle et al., “Background on Community-Level Work on School Readiness,” Child Trends, 2000.

15. “Children in Single-Parent Families, by Race: 2010” (Baltimore, MD: Annie E. Casey Foundation, 2012).

16. For further discussion, see Sternheimer, Kids These Days, chap. 2.

17. Devah Pager, “The Mark of a Criminal Record,” American Journal of Sociology 108, no. 5 (2003): 937–975; Nell Bernstein, All Alone in the World: Children of the Incarcerated (New York: New Press, 2005).

18. Patrick Tucker, “Subprime Lenders Target Women Unfairly,” Futurist 41, no. 7 (2007).

19. Irving Janis and Seymour Feshbach, “Effects of Fear-Arousing Communications,” Journal of Abnormal and Social Psychology 48 (1953): 78–92.


Bibliography

Adams, Terri M., and Douglas B. Fuller. “The Words Have Changed but the Ideology Remains the Same: Misogynistic Lyrics in Rap Music.” Journal of Black Studies 36, no. 6 (2006): 938–957.

Adler, Patricia A., and Peter Adler. Peer Power: Preadolescent Culture and Identity. New Brunswick, NJ: Rutgers University Press, 1998.

Altheide, David. Creating Fear: News and the Construction of Crisis. New York: Aldine de Gruyter, 2002.

Anderson, Daniel R. “Educational Television Is Not an Oxymoron.” Annals of the American Academy of Political and Social Science 557 (May 1998): 24–38.

Anderson, Daniel R., et al. “Early Childhood Television Viewing and Adolescent Behavior: The Recontact Study.” Monographs of the Society for Research in Child Development 66 (2001): 1–154.

Anderson, Elijah. Streetwise: Race, Class, and Change in an Urban Community. Chicago: University of Chicago Press, 1990.

Ang, Ien. Living Room Wars: Rethinking Audiences for a Postmodern World. London: Routledge, 1996.

Ariès, Philippe. Centuries of Childhood: A Social History of Family Life. New York: Random House, 1962.

Arnett, Jeffrey Jensen. “Adolescents’ Uses of Media for Self-Socialization.” Journal of Youth and Adolescence 24 (1995): 519–533.

Bailey, Beth L. From Front Porch to Back Seat: Courtship in Twentieth-Century America. Baltimore, MD: Johns Hopkins University Press, 1989.

Barker, Martin, and Julian Petley, eds. Ill Effects: The Media/Violence Debate. London: Routledge, 1997.

Bauerlein, Mark. The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (or, Don’t Trust Anyone Under 30). New York: Tarcher, 2009.

Best, Joel. Damned Lies and Statistics: Untangling Numbers from Media, Politicians, and Activists. Berkeley and Los Angeles: University of California Press, 2001.

———. Random Violence: How We Talk About New Crimes and New Victims. Berkeley and Los Angeles: University of California Press, 1999.

———. The Stupidity Epidemic: Worrying About Students, Schools, and America’s Future. New York: Routledge, 2011.

Binder, Amy. “Constructing Racial Rhetoric: Media Depictions of Harm in Heavy Metal and Rap Music.” American Sociological Review 58 (1993): 753–767.

Boush, David M., Marian Friestad, and Gregory M. Rose. “Adolescent Skepticism Toward TV Advertising and Knowledge of Advertiser Tactics.” Journal of Consumer Research 21 (1994): 166–175.

Brumberg, Joan Jacobs. Fasting Girls: The History of Anorexia Nervosa. New York: Vintage Books, 2000.

Bryson, Bethany. “‘Anything but Heavy Metal’: Symbolic Exclusion and Musical Dislikes.” American Sociological Review 61 (1996): 884–899.

Buckingham, David. After the Death of Childhood: Growing Up in the Age of Electronic Media. Cambridge: Polity Press, 2000.

———. The Making of Citizens: Young People, News, and Politics. London: Routledge, 2000.

———. “Media Education in the U.K.: Moving Beyond Protectionism.” Journal of Communication 1 (1998): 33–43.

———, ed. Reading Audiences: Young People and the Media. Manchester: Manchester University Press, 1993.

Calvert, Karin. Children in the House: Material Culture of Early Childhood, 1600–1900. Boston: Northeastern University Press, 1992.

Calvert, Sandra. Children’s Journeys Through the Information Age. Boston: McGraw-Hill, 1999.

Clark, Cindy Dell. Flights of Fancy, Leaps of Faith: Children’s Myths in Contemporary America. Chicago: University of Chicago Press, 1995.

Cobb, Michael D., and William A. Boettcher III. “Ambivalent Sexism and Misogynistic Rap Music: Does Exposure to Eminem Increase Sexism?” Journal of Applied Social Psychology 37, no. 12 (2007): 3025–3042.

Cohen, Stanley. Folk Devils and Moral Panics. 3rd ed. New York: Routledge, 2002.

Connell, R. W. Masculinities. 2nd ed. Cambridge: Polity Press, 2005.

Coontz, Stephanie. Marriage, a History: From Obedience to Intimacy; or, How Love Conquered Marriage. New York: Viking, 2005.

Cooper, Cynthia. Violence on Television: Congressional Inquiry, Public Criticism, and Industry Response—a Policy Analysis. Lanham, MD: University Press of America, 1996.

Corsaro, William A. The Sociology of Childhood. 2nd ed. Thousand Oaks, CA: Sage, 2004.

Côté, James E., and Anton L. Allahar. Generation on Hold: Coming of Age in the Late Twentieth Century. New York: New York University Press, 1994.

Crawford, Garry. “The Cult of the Champ Man: The Cultural Pleasures of Championship Manager/Football Manager Games.” Information, Communication, and Society 9 (2006): 523–540.

Crawford, Garry, and Victoria Gosling. “Toys for Boys? Marginalization and Participation as Digital Gamers.” Sociological Research Online 10, no. 1 (2005).

Dayan, Daniel, and Elihu Katz. Media Events: The Live Broadcasting of History. Cambridge, MA: Harvard University Press, 1992.

Donovan, Barna William. Blood, Guns, and Testosterone: Action Films, Audiences, and a Thirst for Violence. Lanham, MD: Scarecrow Press, 2010.

Douglas, Susan J. The Rise of Enlightened Sexism: How Pop Culture Took Us from Girl Power to Girls Gone Wild. New York: St. Martin’s Griffin, 2010.

Eder, Donna, Catherine Colleen Evans, and Stephen Parker. School Talk: Gender and Adolescent Culture. New Brunswick, NJ: Rutgers University Press, 1995.

Felson, Richard. “Mass Media Effects on Violent Behavior.” Annual Review of Sociology 22 (1996): 103–129.

Fiske, John. Media Matters: Everyday Culture and Political Change. Minneapolis: University of Minnesota Press, 1994.

Flynn, James R. What Is Intelligence? New York: Cambridge University Press, 2007.

Fowles, Jib. The Case for Television Violence. Thousand Oaks, CA: Sage, 1999.

Freedman, Jonathan L. Media Violence and Its Effect on Aggression. Toronto: University of Toronto Press, 2002.

Gaddy, Gary D. “Television’s Impact on High School Achievement.” Public Opinion Quarterly 50, no. 3 (1986): 340–359.

Gauntlett, David. Moving Experiences: Understanding Television’s Influences and Effects. London: John Libbey, 1995.

———. “Ten Things Wrong with the Effects Model.” In Approaches to Audiences: A Reader, edited by Roger Dickinson, Ramaswami Harindranath, and Olga Linné. London: Arnold, 1998.

Giroux, Henry. Channel Surfing: Racism, the Media, and the Deconstruction of Today’s Youth. New York: St. Martin’s Press, 1998.

———. The Mouse That Roared. New York: Rowman and Littlefield, 1999.

Gitlin, Todd. “Media Sociology: The Dominant Paradigm.” Theory and Society 6 (1978): 205–253.

———. Media Unlimited: How the Torrent of Images and Sounds Overwhelms Our Lives. New York: Metropolitan Books, 2001.

Glassner, Barry. The Culture of Fear: Why Americans Are Afraid of the Wrong Things. New York: Basic Books, 2010.

Goldman, Robert, and Stephen Papson. Sign Wars: The Cluttered Landscape of Advertising. New York: Guilford Press, 1996.

Gorman, Lyn, and David McLean. Media and Society in the Twentieth Century: A Historical Introduction. New York: Blackwell, 2003.

Greer, Chris, ed. Crime and Media: A Reader. London: Routledge, 2010.

Gunter, Barrie, and Jill L. McAleer. Children and Television: The One-Eyed Monster? New York: Routledge, 1990.

Hartley, John. The Politics of Pictures: The Creation of the Public in the Age of Popular Media. London: Routledge, 1992.

Heins, Marjorie. Not in Front of the Children: “Indecency,” Censorship, and the Innocence of Youth. New Brunswick, NJ: Rutgers University Press, 2007.

Hine, Thomas. The Rise and Fall of the American Teenager: A New History of the American Adolescent Experience. New York: Perennial, 1999.

Hodge, Robert, and David Tripp. Children and Television: A Semiotic Approach. Stanford, CA: Stanford University Press, 1986.

Hoffner, Cynthia, et al. “The Third-Person Effect in Perceptions of the Influence of Television Violence.” Journal of Communication 51 (2001): 283–298.

Ingraham, Chrys. White Weddings: Romancing Heterosexuality in Popular Culture. 2nd ed. New York: Routledge, 2008.

James, Allison, Chris Jenks, and Alan Prout. Theorizing Childhood. New York: Teacher’s College Press, 1998.

James, Allison, and Alan Prout. Constructing and Reconstructing Childhood: Contemporary Issues in the Sociological Study of Childhood. London: Falmer Press, 1997.

Jenkins, Henry, ed. The Children’s Culture Reader. New York: New York University Press, 1998.


John, Deborah Roedder. “Consumer Socialization of Children: A Retrospective Look at Twenty-Five Years of Research.” Journal of Consumer Research 26 (1999): 204.

Johnson, Steven. Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter. New York: Riverhead Books, 2005.

Jones, Gerard. Killing Monsters: Why Children Need Fantasy, Super Heroes, and Make-Believe Violence. New York: Basic Books, 2002.

Kelley, Peter, David Buckingham, and Hannah Davies. “Talking Dirty: Children, Sexual Knowledge, and Television.” Childhood 6, no. 2 (1999): 221–242.

Kincaid, James R. Child-Loving: The Erotic Child in Victorian Literature. New York: Routledge, 1992.

Kincheloe, Joe L. “The New Childhood: Home Alone as a Way of Life.” In The Children’s Culture Reader, edited by Henry Jenkins. Boulder, CO: Westview Press, 1998.

King, Cynthia M. “Effects of Humorous Heroes and Villains in Violent Action Films.” Journal of Communication 1 (2000): 5–24.

Kitsuse, John, and Malcolm Spector. Constructing Social Problems. Edison, NJ: Transaction, 2000.

Kitzinger, Jenny. “Who Are You Kidding? Children, Power, and the Struggle Against Sexual Abuse.” In Constructing and Reconstructing Childhood: Contemporary Issues in the Sociological Study of Childhood, edited by Allison James and Alan Prout. London: Falmer Press, 1997.

Krcmar, Marina, and Kathryn Greene. “Predicting Exposure to and Uses of Television Violence.” Journal of Communication 3 (1999): 24–45.

Lears, Jackson. Fables of Abundance: A Cultural History of Advertising in America. New York: Basic Books, 1994.

Linn, Susan. Consuming Kids: The Hostile Takeover of Childhood. New York: New Press, 2004.

Loe, Meika. “The Prescription of a New Generation.” Contexts 7, no. 2 (2008): 46–49.

Louv, Richard. Childhood’s Future. New York: Anchor Books, 1990.

Majors, Richard, and Janet Mancini Billson. Cool Pose: The Dilemmas of Black Manhood in America. New York: Touchstone, 1992.

Males, Mike A. Framing Youth: Ten Myths About the Next Generation. Monroe, ME: Common Courage Press, 1999.

———. The Scapegoat Generation: America’s War on Adolescents. Monroe, ME: Common Courage Press, 1996.

———. Teenage Sex and Pregnancy: Modern Myths, Unsexy Realities. Santa Barbara, CA: Praeger, 2010.

Mander, Jerry. Four Arguments for the Elimination of Television. New York: Morrow Quill Paperbacks, 1978.

Mangleburg, Tamara F., and Terry Bristol. “Socialization and Adolescents’ Skepticism Toward Advertising.” Journal of Advertising 27 (1998): 11–20.

McCombs, Maxwell E., and Donald L. Shaw. “The Agenda-Setting Function of the Mass Media.” Public Opinion Quarterly 36, no. 2 (1972): 176–187.

Medved, Michael. Hollywood vs. America: Popular Culture and the War on Traditional Values. New York: HarperCollins, 1992.

Morley, David. Television, Audiences, and Cultural Studies. New York: Routledge, 1992.

Nasaw, David. Children of the City: At Work and at Play. New York: Oxford University Press, 1986.


Olney, Martha. Buy Now, Pay Later: Advertising, Credit, and Consumer Durables in the 1920s. Chapel Hill: University of North Carolina Press, 1991.

Palladino, Grace. Teenagers: An American History. New York: Basic Books, 1996.

Patchin, Justin W., and Sameer Hinduja. Cyberbullying Prevention and Response: Expert Perspectives. New York: Routledge, 2012.

Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Penguin Books, 1985.

Potter, W. James, and Ron Warren. “Considering Policies to Protect Children from TV Violence.” Journal of Communication 4 (1996): 116–138.

Quart, Alissa. Branded: The Buying and Selling of Teenagers. New York: Basic Books, 2003.

Robinson, Thomas N., Helen L. Chen, and Joel D. Killen. “Television and Music Video Exposure and Risk of Adolescent Alcohol Use.” Pediatrics (1998): 102–107.

Rose, Tricia. “‘Fear of a Black Planet’: Rap Music and Black Cultural Politics in the 1990s.” Journal of Negro Education 60 (1991): 276–290.

Sargent, James D., et al. “Effect of Parental R-Rated Movie Restriction on Adolescent Smoking Initiation: A Prospective Study.” Pediatrics 114 (2004): 149–156.

———. “Exposure to Smoking Depictions in Movies.” Archives of Pediatric and Adolescent Medicine 161 (2007): 849–856.

Schor, Juliet B. Born to Buy: The Commercialized Child and the New Consumer Culture. New York: Scribner, 2004.

Schudson, Michael. Advertising, the Uneasy Persuasion: Its Dubious Impact on American Society. New York: Basic Books, 1984.

Schwartz, Hillel. Never Satisfied: A Cultural History of Diets, Fantasies, and Fat. New York: Free Press, 1986.

Scott, Derek. “The Effect of Video Games on Feelings of Aggression.” Journal of Psychology 129 (1995): 121–133.

Seidel, Ruth. Keeping Women and Children Last: America’s War on the Poor. New York: Penguin, 1998.

Seiter, Ellen. “Children’s Desires/Mother’s Dilemmas: The Social Contexts of Consumption.” In The Children’s Culture Reader, edited by Henry Jenkins. New York: New York University Press, 1998.

———. Television and New Media Audiences. Oxford: Oxford University Press, 1999.

Snyder, Leslie B., et al. “Effects of Alcohol Advertising Exposure on Drinking Among Youth.” Archives of Pediatric and Adolescent Medicine 160 (2006): 18–24.

Spigel, Lynn. “Seducing the Innocent: Childhood and Television in Postwar America.” In The Children’s Culture Reader, edited by Henry Jenkins. New York: New York University Press, 1998.

Springhall, John. Youth, Popular Culture, and Moral Panics: Penny Gaffs to Gangsta-Rap, 1830–1996. New York: St. Martin’s Press, 1998.

Stacey, Judith. Brave New Families: Stories of Domestic Upheaval in Late-Twentieth-Century America. New York: Basic Books, 1990.

Steinberg, Shirley R., and Joe L. Kincheloe, eds. Kinderculture: The Corporate Construction of Childhood. Boulder, CO: Westview Press, 1998.

Sternheimer, Karen. “Do Video Games Kill?” Contexts 6, no. 1 (2007): 13–17.

———. “Hollywood Doesn’t Threaten Family Values.” Contexts 8, no. 4 (2008): 44–48.

———. Kids These Days: Facts and Fictions About Today’s Youth. Lanham, MD: Rowman and Littlefield, 2006.

———. “A Media Literate Generation? Adolescents as Active, Critical Viewers: A Cultural Studies Approach.” PhD diss., University of Southern California, 1998.

Thorne, Barrie. Gender Play: Girls and Boys in School. New Brunswick, NJ: Rutgers University Press, 1993.

———. “Re-visioning Women and Social Change: Where Are the Children?” Gender and Society 1 (1987): 85–109.

Tobin, Joseph. Good Guys Don’t Wear Hats: Children’s Talk About the Media. New York: Teacher’s College Press, 2000.

Vandewater, Elizabeth A., Mi-suk Shim, and Allison G. Caplovitz. “Linking Obesity and Activity Level with Children’s Television and Video Game Use.” Journal of Adolescence 27, no. 1 (2004): 71–85.

Wertham, Frederic. “Such Trivia as Comic Books.” In The Children’s Culture Reader, edited by Henry Jenkins. New York: New York University Press, 1998.

Wilson, William Julius. More Than Just Race: Being Black and Poor in the Inner City. New York: W. W. Norton, 2009.

Winn, Marie. The Plug-in Drug: Television, Computers, and Family Life. 1977. Reprint, New York: Penguin, 2002.

Wooden, Wayne S., and Randy Blazak. Renegade Kids, Suburban Outlaws: From Youth Culture to Delinquency. 2nd ed. Belmont, CA: Wadsworth, 2001.

Woodhead, Martin. “Psychology and the Cultural Construction of Children’s Needs.” In Constructing and Reconstructing Childhood: Contemporary Issues in the Sociological Study of Childhood, edited by Allison James and Alan Prout. London: Falmer Press, 1997.

Wykes, Maggie, and Barrie Gunter. The Media and Body Image: If Looks Could Kill. London: Sage, 2005.

Zelizer, Viviana A. “Kids and Commerce.” Childhood 9 (2002): 375–396.

———. Pricing the Priceless Child: The Changing Social Value of Children. Princeton, NJ: Princeton University Press, 1994.


Index

Abortion, 36, 147, 150, 165, 175, 179
Academic achievement, 76–77, 82, 83, 88, 93
  television and, 75–76, 77, 277
Adams, Terri M., 14
Adler, Patricia, 158–159
Adler, Peter, 158–159
Adolescence, 3, 24, 145–146
Advertising, 212, 222, 229, 238, 245, 260–261, 267
  alcohol, 231, 232, 235
  blaming, 255–258, 268
  children and, 199, 200–201, 228, 247, 255, 256–257, 258, 260, 262
  consumption and, 248, 249, 255, 258–262
  education and, 262, 263
  food, 198, 199–200
  school, 262, 263, 264
Aftab, Parry, 48
Aggression, 12, 116, 119, 128, 132
  television and, 116, 117
  video games and, 118, 120, 121
Aggressive and Violent Behavior (journal), 117, 121
Alcohol, 22, 152, 236, 239
  drinking, 220, 229, 230, 233, 234, 235, 240
  popular culture and, 229–235
Alcohol abuse, 219, 220, 230, 235, 238, 240, 280
Alcott, Louisa May, 71
Allen, Steve, 10
Altheide, David L., 15, 282–283
American Academy of Pediatrics (AAP), 80, 197
Anderson, Craig, 118, 119, 120
Anderson, Daniel R., 75–76, 77
Anderson, Elijah, 128, 130
Annie E. Casey Foundation, 94
Anorexia, 2, 197, 198, 203–212
Anxiety, 6, 38, 40, 50, 121
Arbuckle, Roscoe “Fatty,” 143
Archives of Pediatrics and Adolescent Medicine, 78, 82, 224, 231
Ariès, Philippe, 29
Arthur, Bea, 177
Attention deficit hyperactivity disorder (ADHD), 77, 78


Australian Medical Association, restrictions by, 205
Autism, 79, 80

Bailey, Beth L., 147
Bakan, Joel, 21
Ball, Lucille, 144
Bandura, Albert: “Bobo doll” experiment by, 111
Barrett, Patrick: on advertising/children, 261
Bauerlein, Mark, 72
Behavior, 5, 103, 146, 149, 159, 160, 236
  changes in, 148
  consumer, 261
  crossover, 159
  media and, 273
  risky, 154, 156, 240
  scandalous, 142–143
  sexual, 145, 153, 154, 156, 273
Bell, Daniel, 250
Bernstein, Nell, 280
Best, Joel, 12, 54
Beyoncé, 197
Billson, Janet Mancini, 128–129
Binder, Amy, 10
Birth control, 36, 142, 145–146, 150, 155, 165
  access to, 147, 162, 278
  availability of, 164, 186
  learning about, 151
Birthrates, 118, 151, 152
  unmarried, 177, 179, 180–184
Blazak, Randy, 109
Body dissatisfaction, 198, 208, 209, 210, 212
Body mass index (BMI), 200, 205, 206
Boston Globe, 21, 48, 69, 102, 114, 140, 197, 202
Boyer, Debra, 153
Brisman, Julissa, 61
Brokaw, Tom, 7
Brown, Linda, 91–92
Brown, Murphy, 179, 180
Brown v. Board of Education (1954), 91–92
Bryson, Bethany, 9
Buckingham, David, 26
Bulimia, 198, 203–212
Bullying, 13, 16, 54, 56–59, 64, 273
Bureau of Justice Statistics, 56
Bureau of Labor, 129, 279
Bush, George H. W., 176


Calvert, Karin, 29, 30
Celebrities, 190, 203, 208, 212, 215, 227
  drug abuse by, 235–236
  eating disorders and, 204, 206, 211
  unmarried births and, 183–184
Center for Commercial-Free Public Schools, 263
Center for Research on Adolescent Health and Development, 234
Center for the New American Dream, 266–267
Center for Tobacco Control Research, 223
Centers for Disease Control and Prevention (CDC), 38, 149, 182, 202, 221, 225
  study by, 55, 150, 153
Chamber of Fashion, restrictions by, 205
Channel One, 262, 264
Chaplin, Lan Nguyen, 266
Chicago Sun-Times, 72, 250, 251
Child labor, 28, 30–31, 33
Childhood
  best time for, 37–41
  caricature of, 256
  challenges of, 40–41
  changes for, 2, 27–28, 28–32, 34, 39–40, 41, 273
  creation of, 27, 29, 32–37
  experiences of, 3, 4, 24, 25–26, 28, 39
  fantasy and, 40, 123
  innocence of, 37, 163, 249
  meaning of, 22, 23–27, 37
  media and, 27, 37, 40, 43
  popular culture and, 21, 27, 28, 35
Children
  adults and, 29, 30, 33
  autonomy of, 31
  consumption and, 254
  economic roles of, 252–253
  influencing, 259–260
  media and, 17, 25–26, 27, 37, 40, 41, 42, 43, 257
  percent of/family income, 252 (fig.)
  popular culture and, 22, 23
  poverty and, 14, 52, 252, 275 (fig.), 276–277
  stereotypes about, 24, 26
  success/popularity and, 257–258
Children’s Defense Fund (CDF), 213–214
Children’s Television Workshop, 77
Christian Science Monitor, 223, 224
Civil rights movement, 93, 281
Clark, Cindy Dell, 253
Clementi, Tyler, 51–52, 53
Clinton, Bill, 282


Clinton, Hillary Rodham, 53, 256
CNN, 2, 52, 56, 231, 239
Cohen, Stanley, 8
Coleman, Gary, 177
College Board, 266
Columbine High School shooting, 8–9, 102, 110
Combs, Sean “Diddy,” 220
Communication, 1–2, 17, 48, 148, 152
  electronic, 49, 63, 64, 74, 83–84
Compulsive dieters/eaters, 209
Computers, 95, 201, 247, 264
  education and, 85–86
Comstock laws, 142, 186
Condoms, using, 150, 156, 163
Constantin, Norman, 234
Consumerism, 33, 246, 247, 253–254, 255, 256
  advertising and, 248, 249
  children and, 252, 257, 265
  critical, 265–268
  social movements and, 265–268
Consumption, 22–23, 210, 254, 267, 285
  advertising and, 255, 258–262
  culture of, 249–250, 255, 264, 265, 266
  social problems and, 262–265
Contraception, 151, 153, 186
Coontz, Stephanie, 188
Cooper, Cynthia, 12
Council of Fashion Designers of America, 205
Crawford, Garry, 122
Creating Fear: News and the Construction of Crisis (Altheide), 15, 282–283
Crime, 6, 61, 115, 122, 234, 280
  adult, 105–106, 107
  committing, 107, 109
Culture, 9, 32, 74
  changes in, 47, 175, 228
  children’s, 253, 254
  consumerism and, 265
  economy and, 266
  generational differences and, 28
  junk, 21
  problems with, 274
  teen pregnancy and, 279
  youth, 32–33, 283
Cumberbatch, Guy, 120, 121
Cyber predators, 49, 59–61
Cyberbullying, 48, 49, 50–59, 65
Cyberreality, 61–63


Cyrus, Miley, 139, 220

Dating, 145–146, 147, 158
DeGeneres, Ellen, 52
Depression, 50, 60, 276
Dieting, 197, 207, 209, 279–280
Dill, Karen, 118, 119, 120
Divorce, 35, 36, 178 (fig.), 182, 184, 186, 187–188, 193
  data on, 189, 192
  early pregnancy and, 279
  increase in, 177, 185, 187
  labor force and, 185
Douglas, Susan J., 162, 208
Drew, Lori, 60
Drinking
  binge, 229, 230
  driving and, 38, 233
  moderate, 234, 235
  popular culture and, 16, 220–221
  teen, 38, 231–233, 234
  television and, 230–231
  video games and, 230–231
Dropout rates, 74, 277, 278
Drug abuse, 39, 219, 220, 280
  education and, 238, 239–240, 241
  ethnicity/race and, 238, 240
  Internet and, 238, 239
  movies and, 235
  music and, 240
  popular culture and, 16, 235–239
  television and, 235
Drug Abuse Warning Network (DAWN), 238
Drugs, 22, 111, 112, 148, 280
  idealized behavior and, 236
  sex and, 152
  using, 237–238, 239
Duff, Hilary, 197

Eating disorders, popular culture and, 16, 203–212, 273
Economic changes, 5, 13, 23, 142, 166, 175, 184, 191
Economic growth, 28, 33, 36, 146–147, 189, 248, 265
Economic issues, 25, 28, 40, 93, 129, 187, 192, 193, 249, 267, 277–281
Education, 15, 23, 31–32, 77, 89, 160, 189, 245, 266, 280, 286
  abstinence, 148, 151, 152, 162
  advertising and, 262, 263
  alcohol abuse and, 240
  computers and, 85–86
  drug abuse and, 238, 239–240, 241
  increase in, 90, 91, 274
  lack of, 4, 28, 108
  media and, 91, 95, 285
  parental, 117, 225
  popular culture and, 16, 72–73, 74, 91, 273
  poverty and, 4, 91
  public, 252, 262, 264, 274
  sex, 148, 278, 279
  smoking and, 226, 240
  social structure and, 91–95
  television and, 76, 80, 81, 91, 95
Einstein, Albert, 86
Environmental Protection Agency (EPA), 264, 265
Equal Credit Opportunity Act (1974), 186
Estrada, Emir, 253
Ethnicity, 25, 226
  alcohol abuse and, 240
  drug abuse and, 238
  obesity and, 198, 202, 280
  poverty and, 181
Eunick, Tiffany, 112, 113, 114
Evangelical Christians, 175
  divorce rates and, 187
  pregnancy and, 278–279

Facebook, 18, 47, 48, 50, 57, 60, 63, 140, 203, 220
Fair Housing Act (1968), 92
Families
  changes in, 40, 184–193
  divorced, 177
  extended, 185
  low-income, 94, 276
  representation of, 176–177, 179
  television, 175–177
Families and Work Institute, 250
Family disruption, 36, 92, 134, 219, 287
Family law, illegitimacy and, 187
Fashion industry, 203, 204, 206, 208
Fear, 4–5, 5–10, 40, 49, 52, 110, 145, 158
  dealing with, 121, 122, 283
  media, 4, 6, 10, 16, 17, 23, 50, 84–85, 124, 282, 283, 286
  violence and, 104, 128, 130, 131
Federal Bureau of Investigation (FBI), 106, 107, 133
Federal Trade Commission (FTC), 199
Fellini, Federico, 144
Feminism, 162, 186, 208, 211


50 Cent, 72
Fine, David, 153
Fiske, John, 42
Flynn, James R., 88
Food and Drug Administration (FDA), 222
Food insecurity, 276
Foods
  cheap, 279–280
  junk, 197, 201, 263
  sugary/fattening, 198, 199
Formanek-Brunell, Miriam, 30
Frampton, Ian, 206
Freedman, Jonathan L.: media/violence and, 103
Freud, Sigmund, 34, 147
Friedan, Betty, 185
Fuller, Douglas B., 14

Galinsky, Ellen, 250
Gays and lesbians, 51, 53, 175
Gender, 13, 17, 34, 158, 162, 163, 208
  meaning of, 161, 165, 167
  popular culture and, 161, 283
  social construction of, 161–166, 175, 240, 284
General Social Survey, 9, 156, 189
Gerbner, George, 130
Gerson, Michael, 84
Gitlin, Todd, 126
Glantz, Stanton, 223
Glassner, Barry, 13
Godard, Jean Luc, 144
Gore, Tipper, 7
Gortmaker, Steven, 199, 202
Gosling, Victoria, 122
Graduation rates, 38, 90–91, 182
Griswold v. Connecticut (1965), 146, 186
Grossman, David, 110
Gunter, Barrie, 212
Guttmacher Institute Report, 151

Harassment, 50, 52, 53, 54, 59, 65
Harkin, Tom, 200
Harris Interactive Poll, 89–90, 246
Hart, Betsy, 250, 251
Hartley, John, 42
Harvard School of Public Health, 199, 277
Hayes, Donald P., 88
Hays, Will, 143


Hays Office, 143
Health care, 3, 41, 214, 220, 276, 280
  availability of, 198, 212
  bankruptcies and, 213
  obesity and, 203
Health insurance, 64, 213–214, 282
Health issues, 189, 212–215, 221, 287
Hegemonic masculinity, 128, 129, 151, 208
Hilton, Perez, 52
Hinduja, Sameer, 57
Hogan, Hulk, 112
Holmes, Malcolm D., 222
Homicides, 55, 62, 111–112, 125
  by age, 106 (fig.)
  arrest rate for, 106–107, 109
  decrease in, 61, 104, 105 (fig.)
  increase in, 105
Homophobia, 16, 51–52, 53, 54, 209, 273, 283, 287
Hondagneu-Sotelo, Pierrette, 253
Huffington Post, 52, 59, 72
Huxley, Aldous, 73
Hyperconsumption, 247, 248, 249, 262, 267, 285

Identity, 159, 190, 254
Imitation hypothesis, 112, 113, 114
Income, 31, 240
  diet and, 279–280
Inconvenient Truth, An (film), 267
Inequality, 74, 275, 286
  gender, 15, 160, 281
  popular culture and, 17, 277, 281
  poverty and, 282
  racial, 15, 93, 94, 193, 281, 284
Ingraham, Chrys, 190
Innocence, 26, 27, 36, 37, 163
Intelligence quotient (IQ) scores, 76, 88, 89
Internet, 2, 50, 51, 58, 71, 73, 86
  access to, 8, 9, 94–95, 262
  addiction, 50, 279
  children and, 49
  criticism of, 42
  drug abuse and, 238, 239
  harm from, 39, 49
  homophobia and, 53
  low-income families and, 94–95
  minorities and, 94
  sex and, 141
  using, 47–48, 59
  violence and, 61, 108
It Gets Better Project, 53

Jenkins, Henry, 35
John, Deborah Roedder, 257, 266
Johnson, Dwayne “the Rock,” 112
Johnson, Steven, 81, 83
Jolie, Angelina, 183–184
Jones, Gerard, 110

Kaiser Family Foundation (KFF), 80, 85, 94, 154, 156, 198, 201
Keister, Lisa A., 279
Kennedy, John F., 71
Kilbourne, Jean, 256, 260
Kincheloe, Joe, 26
King, Rodney, 179
Kitsuse, John, 11
Kitzinger, Jenny, 26
Kozakiewicz, Alicia, 61

Labor force
  divorce and, 185
  women and, 185, 186, 189
Language, 75, 84
Learning, 76, 81
Levine, Michael, 206
Levy, Louise, 187
Levy v. Louisiana (1968), 187
LGBT youth
  cyberbullying and, 53–56
  suicide and, 54, 55
Literacy, 6, 10, 39, 77, 84, 89, 91
Loe, Meika, 239
Los Angeles Times, 10, 84, 102, 115, 206

Majors, Richard, 128–129
Males, Mike A., 150, 152, 222
Mammy myth, 14
Mander, Jerry, 74, 75, 95
Marijuana, 235, 236, 237, 240
Marketing, 33, 259, 262, 263
Marriage, 36, 37, 143, 164, 175
  changes in, 184, 193
  children outside, 182–183, 191
  data on, 191–192
  early, 147, 182, 279
  as economic arrangement, 188–189, 191
  fantasy of, 190–193
  gays/lesbians and, 192
  meaning of, 187–190
  movies and, 190–191
  popular culture and, 190, 192, 193
  pregnancy and, 36, 279
  unhappy, 186, 187–188, 189
Marsden, Andrew, 261
Massachusetts Public Interest Research Group, 227
Materialism, 2, 43, 246, 247, 249, 256
  advertising and, 268
  impact of, 248, 251, 262, 267
  popular culture and, 16, 273
McLorg, Penelope A., 209
Media
  analysis of, 283, 284
  blaming, 2, 3, 41–43, 48, 114, 115, 161, 275, 281–283, 285
  changes for, 47, 64, 152
  children and, 17, 26, 27, 37, 40, 41, 42, 43, 257
  focus on, 11, 12–13, 133
  impact of, 3, 48
  minding, 49, 81–91, 257
  problems with, 11, 274
  violence and, 18, 102, 104, 106, 108, 109, 112–116, 121–127, 129, 130–134, 275
Media culture, 42, 274–275, 285–287
  changes for, 2, 4
  expansion of, 104–109
  fear of, 1–2, 4–5, 8, 17, 165
  interests/identities and, 43
  pervasiveness of, 12, 18
  production of, 274
Media phobia, 10–17, 274, 275, 284
Medved, Michael, 184, 190–191
Meier, Megan, 60
Mental health, 56, 64, 65, 205, 210
Miller, Terry, 53
Minick, Haley, 261
Miscegenation laws, 165–166
Models, 208, 210, 212, 215
  anorexia/bulimia and, 204–206, 207
Monitoring the Future (MTF), 221, 233–234, 235
Monroe, Marilyn, 210
Moral panics, 8–9, 16, 23, 47, 104, 105, 133, 273
Movies, 101, 111, 123, 124, 131, 226, 286
  alcohol and, 235
  censorship of, 143
  criticism of, 42, 115
  divorce and, 175
  drug abuse and, 235
  marriage and, 190–191
  sex and, 144, 160
  smoking and, 222, 223, 224, 225, 227
  violence and, 10, 17, 103, 110, 126
Moynihan, Daniel Patrick, 14
Murray, John, 123
Music, 7, 18, 76, 101, 286
  criticism of, 9, 10, 42, 115
  drug abuse and, 240
  violence and, 9, 110
MySpace, cyberbullying and, 60

Nasaw, David, 31
National Assembly, extreme thinness and, 205
National Campaign to Prevent Teen Pregnancy, 149–150
National Center for Children in Poverty, 276
National Center for Education Statistics (NCES), 89
National Crime Prevention Council, 57
National Crime Victimization Survey (NCVS), 14, 62
National Endowment for the Arts (NEA), 89, 90
National Institute of Mental Health, 204
National Institute on Alcohol Abuse and Alcoholism, 230
National Institute on Drug Abuse, 220
National Institutes of Health (NIH), 58
National Science Foundation, 60
National Survey on Drug Use and Health (NSDUH), 229, 233
National Swedish Public Health Institute, 119
New York Times, 21, 48, 52, 84, 164, 206, 234
  on obesity/television, 200, 201
  on video games, 10
No Child Left Behind (NCLB), 93

Obama, Barack, 53
Obesity, 2, 4, 211–212, 276
  causes of, 202–203
  complexity of, 201–202
  ethnicity and, 198, 202, 280
  media and, 275
  popular culture and, 16, 197, 198–203
  poverty and, 198, 202–203, 208–209, 280
  race and, 198, 202, 208–209, 280
  television and, 197, 198, 200, 201–202, 203
Occupational Safety and Health Administration (OSHA), 59
Office of Juvenile Justice and Delinquency Prevention Program, 62


Palin, Bristol, 173–174
Palin, Sarah, 173
Palladino, Grace, 7, 32–33
Parental control, 35, 48, 146, 148, 233
Parenting, 36, 173, 182, 202, 250, 277
Parents Music Resource Center, 7
Partnership for a Drug-Free America, 236
Patchin, Justin W., 57
Patterson, E. Britt, 108
Pediatrics, 79, 82, 154, 230
  teen smoking and, 223, 224
  on television, 77–78
Pew Internet and American Life Project, 57, 62, 84, 149
Pew Research Center, 56–57, 189, 192
Piaget, Jean, 257
Pitt, Brad, 71, 183–184
Popular culture, 2, 43, 125, 184, 239
  childhood and, 21, 27, 28, 35, 42, 283
  consumption of, 33, 42
  criticism of, 8, 9, 11, 12, 15, 16, 22, 72, 115, 128, 175, 204, 273, 274, 277
  diversions of, 277–281
  fear of, 5–6, 11, 274
  focus on, 19, 41, 215, 221, 222–223
  impact of, 2, 3, 71, 159
  lessons from, 283–285
Porter, Cole, 236
Postman, Neil, 73
Poverty, 36, 41, 152, 162, 183, 226, 251, 274
  children and, 14, 52, 252, 275 (fig.), 276–277
  culture of, 267
  cycle of, 15, 91, 181
  education and, 4, 91
  ethnicity and, 181
  obesity and, 198, 202–203, 208–209, 280
  popular culture and, 3–5, 198, 277
  problem with, 275–277, 282
  race and, 181, 279
  talking about, 281, 282
  teen pregnancy and, 278
  teen sex and, 152, 153
  violence and, 107, 108, 126–132, 134
Power, 17, 160, 162, 167, 188, 208, 246, 284
Pregnancy, 118
  concerns about, 166, 173
  divorce and, 279
  learning about, 155
  marriage and, 36, 279
  unplanned, 164, 179
Presley, Elvis, 7
Prince, panic over, 7
Promiscuity, 2, 16, 37, 139, 140, 166, 286
  teen, 140, 148–153
Psychological health, 34, 108, 117, 123
Public health advocates, 198, 200, 203, 209
Public Health Institute, 234

Quayle, Dan, 179

Race, 13, 17, 91, 162, 226
  alcohol abuse and, 240
  obesity and, 198, 202, 208–209, 280
  poverty and, 181, 279
  smoking and, 240
  teen sex and, 153
  violence and, 126–132
Racism, 16, 18, 209, 281, 283, 287
Rap, 9–10, 14, 128, 130
Rape, 14, 105, 140, 153, 162, 164
Ravi, Dharun, 51–52
Reading, 77, 82, 86, 88, 89, 90
  television and, 75, 76
Reagan, Ronald, 14
Religion, 211, 286
  gender inequality, 160
  wealth and, 279
Rich-Edwards, Janet, 278
Rodemeyer, Jamey, 52
Roffman, Deborah, 140, 141
Rose, Tricia, 9

Sampson, Robert, 277
SAT scores, 86–87, 87 (fig.), 88, 89, 91
Savage, Dan, 53
Scalia, Antonin, 101
School performance, 15, 75, 82, 225, 275
Schools, 5, 32, 72, 91, 282
  advertising in, 262, 263, 264
  corporate influence in, 263, 264
  dropout rates and, 277, 278
Schor, Juliet B., 257
Schudson, Michael, 228
Schwartz, Hillel, 210
Schwarzenegger, Arnold, 101
Scott, Derek, 121
Securities and Exchange Commission (SEC), 60
Segregation, 7, 92, 93, 276, 277, 281
Seiter, Ellen, 41, 253
Sex, 13, 22, 149, 186
  attitudes about, 141–142
  engaging in, 139, 150, 151, 165
  information about, 26, 142, 147, 148, 155, 162–163
  media and, 18, 36, 141, 142, 144, 153–161, 162, 164, 166, 167
  popular culture and, 139, 140, 142, 144, 150, 154, 158, 160, 161, 166–167, 273
  poverty and, 152
  social construction of, 144, 161–166
  talking about, 148, 156–157, 160
  television and, 141, 144, 154–155, 160, 166
Sexism, 14, 16, 162, 281, 284, 287
Sexual abuse, 50, 153, 163, 209
Sexual images, 17, 142, 156, 162
Sexual orientation, 13, 52, 53, 55, 281
Sexual revolution
  popular culture and, 142–148
  social structure and, 142–148
Sexuality, 148, 163, 164
  changing attitudes about, 40, 144
  importance of, 158–159
  minorities and, 165–166
  popular culture and, 283
  representations of, 166, 167, 284
  talking about, 148, 157, 161
Sexually transmitted diseases (STDs), 152, 156, 163
Shakespeare, William, 72
Singer, Dorothy, 124
Single parenthood, 35, 174–175, 192, 193, 274, 279
  celebrity, 4, 183
  ethnicity and, 184
  glamorizing, 175, 179
  popular culture and, 16, 273
  race and, 184
  socioeconomic status and, 184
Skinner, B. F., 110
Slavery, 14, 30, 36, 91, 93, 166
Smoking, 219, 230, 238
  advertising and, 222
  education and, 226, 240
  ethnicity and, 240
  health care costs from, 219–220
  movies and, 222, 223, 224, 225, 227
  popular culture and, 16, 220–221, 221–229
  poverty and, 226
  race and, 240
  television and, 226–227
Social changes, 23, 40, 142, 161
Social life, 17, 21, 42, 229
Social media, 8, 47–48, 148
Social movements, consumerism and, 265–268
Social networking, 2, 17, 18, 42, 47, 53, 58, 62–63, 64
Social order, 9–10, 132, 281
Social problems, 15, 22, 36, 41, 73, 103
  constructionist approach to, 11–12
  consumption and, 249, 262–265
  media and, 16, 17, 42, 43, 115, 285, 286
  popular culture and, 2–3, 4, 13, 17, 144, 285
Social structure, 13, 15, 81, 133, 268, 274, 275, 276, 281
  child consumers and, 249–255
  education and, 91–95
  sexual revolution and, 142–148
  substance use and, 239–241
Socioeconomic status, 13, 91, 184, 203, 226, 280
Spears, Jamie Lynn, 174
Spector, Malcolm, 11
Spock, Benjamin, 147
Springhall, John, 6
Status, 158, 161, 246
Stereotypes, 24, 26, 131, 162
Steroids, 208, 240
Stop Commercial Exploitation of Children, 255
Storytelling, 13, 126
Strangers, 49, 61, 63
Strober, Michael, 206
Structural conditions, 15, 73, 134, 142, 162, 175, 267, 268
Student loans, 245, 266
Substance abuse, 219, 230, 280, 287
  popular culture and, 220–221, 240–241, 273
  social structure and, 239–241
Substance Abuse and Mental Health Services Administration (SAMHSA), 64, 219, 237
Suicide, 38, 48, 50, 51, 64, 149, 273
  cyberbullying and, 53–56, 58, 59, 65

Tate, Lionel, 112, 113–114
Taub, Diane E., 209
Taylor, Shawn, 245
Technology, 2, 39, 42, 48, 63, 86, 247, 283
Teen birthrates, 150, 151, 152, 181, 181 (fig.), 183
  decrease in, 180
  increase in, 182
Teen Mom, 173–174, 180–184
Teen pregnancy, 4, 36, 41, 118, 147, 156, 174–175, 180–184, 277, 286
  culture and, 279
  decrease in, 274
  ethnicity/race and, 184
  evangelical teens and, 278–279
  media and, 275
  popular culture and, 175, 180, 273
  poverty and, 278
  predictors of, 175, 181, 193
  socioeconomic status and, 184
Teen sexuality, 35, 145–146, 148, 149, 150
  perception of, 36–37
  popular culture and, 152, 154
Teens
  adult sexual involvement with, 152, 153
  childhood/adulthood and, 37
  focusing on, 222
  parental supervision and, 34–35
  popular culture and, 165, 222
  as sexual objects, 164–165
  suicide and, 55
Telecommunications Act (1996), 19, 274
Television, 8, 36, 71, 73, 123, 158, 198, 212, 214–215, 246
  academic achievement and, 76, 77, 277
  adolescence and, 141
  adults and, 141
  communication and, 1–2
  criticism of, 42, 74–75, 115, 203
  dieting and, 197
  drinking and, 230–231
  drug abuse and, 235
  eating disorders and, 207
  education and, 76, 77, 80, 81, 91, 95
  imitating, 111, 112
  impact of, 78–80
  learning and, 76
  movies on, 144
  obesity and, 197, 198, 200, 201–202, 203
  preschoolers and, 75–76
  real families and, 175–177, 179
  sex and, 141, 144, 154–155, 160, 166
  smoking and, 226–227
  violence and, 12, 110, 116, 121, 123, 125, 126, 129, 131
  watching, 74, 77–81, 95, 199, 202, 276
Terrorism, 19, 40, 48
Texting, 18, 63, 74, 84
Thinness, 204, 209, 210, 240
Thompson, Becky Wangsgaard, 209
Thorne, Barrie, 159
Time, 79, 85, 120, 174, 247, 251
Tobacco industry, 221, 222, 223, 227
Tuggle, Justin L., 222
Turdo, Robin Maria, 228

Unmarried teens, percentage of teenage births to, 181 (fig.)
Unmarried women, 185
  birthrate for, 177, 179, 180–184
US Census Bureau, 90, 183, 191, 202–203, 251
US Department of Education, 87
US Department of Justice, 14
US Supreme Court, 54, 91, 101, 102, 146, 186
USA Today, 48, 115, 231, 236, 238

Valentino, Rudolph, 142
Victimization, 4, 104, 128
Video games, 2, 4, 8, 10, 21, 80, 104, 115, 133, 198, 286
  academic achievement and, 277
  aggression and, 118, 120, 121
  criticism of, 42, 110
  drinking and, 230–231
  playing, 39, 76, 82, 83, 122, 133, 201, 202, 279
  violent, 9, 17, 22, 101, 102–103, 108, 110, 114, 117–118, 119, 120, 121, 122, 123, 278, 282
Violence, 10, 40, 41, 52, 286
  aggression and, 120, 132
  causes of, 108, 282, 285
  challenging, 132–134
  committing, 13, 107, 108, 118, 122, 126
  decline of, 104–109
  fear and, 104, 128, 130, 131
  imitating, 104, 109–114
  intimate-partner, 14, 219, 278
  juvenile, 41, 106, 107, 108, 126–127
  meaning of, 122–126, 127, 131–132
  media and, 18, 102, 104, 106, 108, 109, 112–116, 121–127, 129, 130–134, 275
  media culture and, 103, 104, 108, 286
  moral panics about, 104, 133
  movies and, 10, 110, 126
  neighborhood, 81, 92, 127, 131, 133
  popular culture and, 8–9, 11, 13, 16, 101, 102–103, 109–110, 114, 117–118, 122, 123, 125, 128, 131–132, 133, 273, 283
  poverty and, 107, 108, 126–132
  race and, 126–132
  real, 9, 108, 113, 115, 123, 124, 125, 127, 129, 131, 132–134
  television and, 12, 102–103, 116, 117, 121, 123, 125, 126, 129, 131
  video game, 9, 17, 22, 101, 102–103, 108, 110, 114, 117–118, 120, 121, 123, 278
Violent crime, drop in, 61, 105–106
Virginia Tech shooting, 102
Virginity pledges, 163
Vogue, model policy and, 206
Voice of America, 267

Walkerdine, Valerie, 163
Waller, Fats, 236
Washington Post, 48, 84, 85, 140, 231, 235
  obesity/television and, 201
  technology/culture and, 72
  video game violence and, 102, 115
Washington Star, teen smoking and, 223
Washington Times, 83–84, 101
Weber, Max, 246
Wedding industrial complex, 190
Welfare, 14, 282
Wertham, Fredric, 7
Williams, Christine L., 252, 253
Wilson, William Julius, 277
Winerip, Michael, 234
Winn, Marie, 74, 75, 87
Wolfe, Michael F., 88
Wolfer, Loreen T., 88
Women
  objectification of, 17, 160, 198
  social control of, 163–164
  violence against, 14
Wooden, Wayne, 109
World Health Organization, 212
Wykes, Maggie, 212

Youth Risk Behavior Surveillance System (YRBSS), 149, 150

Zelizer, Viviana A., 31, 252–253


Title Page
1. Media Phobia: Why Blaming Pop Culture for Social Problems Is a Problem
2. Is Popular Culture Really Ruining Childhood?
3. Does Social Networking Kill? Cyberbullying, Homophobia, and Suicide
4. What’s Dumbing Down America: Media Zombies or Educational Disparities?
5. From Screen to Crime Scene: Media Violence and Real Violence
6. Pop Culture Promiscuity: Sexualized Images and Reality
7. Changing Families: As Seen on TV?
8. Media Health Hazards? Beauty Image, Obesity, and Eating Disorders
9. Does Pop Culture Promote Smoking, Toking, and Drinking?
10. Consumption and Materialism: A New Generation of Greed?
11. Beyond Popular Culture: Why Inequality Is the Problem
Selected Bibliography
