June 29, 2010
I’ve been really torn about responding to this one–Online Bullies Pull Schools Into the Fray–though since the Times got 547 comments on it, I guess I’d better chime in…
The conundrum is how much schools can do to prevent and punish cyberbullying by text message, YouTube, or Facebook that occurs off school grounds and outside of school hours. The article profiles Benjamin Franklin Middle School in New Jersey, and is accompanied by a chilling picture, which I can only hope was posed, of a small group of very young (like maybe 11 or 12) female students, standing in a semi-circle, not chatting or playing, but texting on BlackBerrys and other elaborate QWERTY-enabled phones.
It’s a hard one for me, because of course I’d like to side with the bullying victims and say that the school should do everything in its power to stop bullying. But in this case, “everything in its power” would mean an unacceptable intrusion into students’ lives outside of school (as we should’ve learned from the Pennsylvania case in which a school district was caught monitoring students in their own bedrooms via their laptop cameras).
And certainly, the schools create the kind of environment in which bullying of all kinds thrives, so it’s tempting to want to hold them responsible when the nastiness goes off school grounds.
But…why do middle school students have smart phones? Why do they have internet usage unmonitored by parents? Why do they have Facebook pages (against even Facebook’s terms of service, which bar anyone under 13) without their parents being their “friends” and therefore able to see everything that happens on the page? These circumstances are created by parents, and raising kids to be decent human beings is the responsibility of parents. It should go without saying, but apparently it doesn’t, that discipline for kids’ wrongdoing outside of school is the responsibility of their parents.
The article recounts a few different schools’ experiences with massively disruptive incidents born from online spats, and their attempts to smooth things out between students. Principal Tony Orsini laments that “All we are doing is reacting. We can’t seem to get ahead of the curve.”
No, Mr. Orsini, you can’t. Because you might have the legal designation of in loco parentis when it comes to restricting students’ civil rights and commanding their behavior in your building, but you can’t actually take the place of a parent. And you can’t teach compassion or respect or human decency in a basically authoritarian institution where children are considered as factory products instead of people.
In the end, Benjamin Franklin Middle School organizes a group of 8th grade girls to talk to the 6th graders about responsible technology use and cyberbullying. Actually, not such a bad idea, to enlist older students, whom kids might be more likely to take seriously than their parents or teachers. The 6th graders seem intrigued. But unintentionally, it sort of highlights exactly what’s wrong with the middle school environments that allow and encourage cyberbullying in the first place: the environment itself practically enforces immaturity. There are no younger kids to look after and protect, there are no older kids to look up to and emulate, and the only adults are ones whose job it is to control and manipulate you, so they’re not very compelling role models. And there’s nothing meaningful to do. Is it really any wonder at all that kids turn to petty technology-mediated aggression for entertainment and to make themselves feel bigger?
I wish schools would return to housing kindergartners through 6th graders together, and 7th graders through high school seniors together. My own school district back in Kansas City has recently gone even further in the wrong direction, separating 6th graders out from the middle schools and putting them in a designated “6th grade center.” I don’t even want to imagine what kind of psychodrama goes on in that place.
The article closes with a sweet-sounding girl named Emily telling the younger kids that if they’re being cyberbullied, “Go to the school. The school will make it stop, immediately!” But Emily dear, most schools can barely control the bullying that goes on inside their walls. I’m glad your school at least takes it seriously.
There’s a saying in biology and genetics regarding the influence of both nature and nurture on outcome: “Genetics loads the gun, and environment pulls the trigger.” In the case of cyberbullying, parental negligence is loading the gun, and school environment is pulling the trigger. (The technology is only amplifying effects; there were predatory little girls a long time before we had Facebook.) But schools aren’t capable of dealing with the consequences; they cannot be expected to act as police, prosecutor, judge and jury when discipline matters cross over into stalking, harassment, and libel; and we should not invite them to have any more reach into students’ out-of-school lives than they already do. It’s like schools have scared parents into doubting their ability to do their own jobs. Parents need to stand up for their kids and take it back. And if they don’t understand the technology or how to control it, they need to learn…before they give a BlackBerry to an 11-year-old.
June 26, 2010
I was tipped off to this amazing video blog project from the HERE Arts Center by a director friend. The MADE HERE Project is exploring, through video interviews with New York performing artists of all stripes, a broad range of the challenges and issues facing those working in the performing arts, and telling the stories of how a number of individual artists have responded to the problems that attend this line of work. Two issues of three episodes apiece are currently available: “Creative Real Estate,” on the issue of finding and keeping affordable space, and “Day & Night Jobs,” about survival jobs and the constant negotiation between finances and the erratic nature of performing arts work. Upcoming episodes this summer will include Activism, Technology, and Family Balance; I’m particularly looking forward to the last one.
I watched all the available episodes straight through. Things that struck me: the disconnect between the popular conception of the arts as being by and for the elite, and the realities of the artists profiled, most of whom have at least one additional job, or live or work in circumstances that many people would consider intolerable, constantly having to scavenge for space or resources or paying work. The seeming lack of a real question as to whether they should be doing what they’re doing, whether they should make a living in some more stable way. And what looks like an almost complete inability to see or consider impossibility. There is only ever a question of how.
June 22, 2010
I’m actually about a week late on my response to this article. Sue me; I’ve been working on two separate productions this week.
In a June 11 article, “Long Road to Adulthood is Growing Longer,” the New York Times reports on the findings of researchers from the MacArthur Research Network on Transitions to Adulthood, purporting to show that young people are actually taking longer to reach adulthood these days.
The disservice that this article does to young adults begins in its second sentence, declaring that “a growing body of research shows that the real Peter Pans are not the boomers, but the generations that have followed.”
This is a careless characterization of 20- and 30-somethings that, on top of everything else we’re facing, we really don’t need, thanks anyway Patricia Cohen. Peter Pan, remember, deliberately determined to remain a child forever. He wasn’t hobbled in his quest for maturity and independence by a major economic collapse, spiraling higher education costs, untenable health care and housing costs, and a 20% unemployment/under-employment rate. He chose eternal childhood and a fantasy existence over what he saw as the drudgery and hypocrisy of the adult world.
Really, New York Times, does that sound like American young adults of today?
Once again, like the article from the Chicago Tribune I wrote about a couple weeks ago, the Times reporter takes at face value, without much critical inquiry, the assumption that fewer young adults meeting certain commonly accepted markers of maturity actually means that young adults are less mature (which, as I’ve said before, I might agree that they are, but for far different reasons). There’s no consideration of the possibility that perhaps, in the face of wildly altered circumstances from those in which our parents and grandparents came of age, young adults are simply making different choices. And what’s more, that those choices might be rational and well-informed.
Early in the article, Cohen cites, as an incidental example of adulthood taking longer to take hold, the provision of the new health care law which allows adult children to stay on their parents’ health insurance policies up to age 26. But this is no indication whatsoever that young adults are less adult; rather, it reflects that wages have been stagnant for 30 years while insurance premiums have spiraled out of range of what a college graduate with an entry-level or hourly-wage job can reasonably afford. How does that reflect on our maturity, rather than on the unfairness and irrationality of our haphazard health care delivery systems?
Frank F. Furstenberg, leader of the MacArthur Research Network on Transitions to Adulthood, is quoted as saying “A new period of life is emerging in which young people are no longer adolescents but not yet adults.”
Is it that we’re not yet adults, or that young adulthood now holds different challenges (among which I would include an educational system that actually discourages rather than encourages maturity and independence) and more choices than it used to? Most of the anecdotes contained in the article are about young women taking longer to complete their education, and thus marrying much later and delaying childbearing. But, among other factors, birth control is legal now. It was once assumed that young adult women would marry and have children by their early 20s, simply because they didn’t have many other options. That wasn’t better for their personal maturity, to do what society assumed they would because they lacked compelling career options; it was far worse. (If you haven’t read it, Betty Friedan makes the case eloquently in The Feminine Mystique, a book I enjoyed and identified with far more than I thought I would.) But the Times article laments that according to a study out of Princeton, “Marriage and parenthood–once seen as prerequisites for adulthood–are now viewed more as lifestyle choices.”
I’m sorry, a “lifestyle choice” is whether to live on the East Side or West Side, to drive or bike to work, to cook at home or eat out, to get a cat or a dog. To marry and have children are serious, life-altering choices involving the fates of at least two other people. It’s not that I wouldn’t like to do those things, but to do either right now would completely upend my career pursuits and independence.
Indeed, I don’t see anyone among the people profiled in the article who is a Peter Pan–a deliberate child. I see people whose careers or economic circumstances necessitated further education, whose life trajectories simply didn’t take the courses they assumed they would (and how many people’s actually do?), and, in the cases of the marriage-delayers, who honestly took their feelings into account in wisely not rushing into marriage at 23. When did honest self-reflection in delaying a major life decision become lack of maturity?
Things get more interesting on the website of the newsletter for the MacArthur Foundation Research Network on Transitions to Adulthood. It seems that much of this group’s current research agenda is based on a vision of the past that, well, isn’t all that true. One of the postulates shaping their body of research reads:
The time period between age 18 and 34 has changed dramatically in the past several decades. Where once young adults moved in lockstep progression through the stages of adulthood—graduating from high school, leaving home, going to college or getting a job, marrying, and starting a family—today this path is no longer ordered and sequential.
Sure, it’s true that young adulthood has changed dramatically in the past few decades. But the assumption and ordering of most of these rites of passage are incredibly skewed towards the middle and upper classes of the mid- to late 20th century. This sequence was far from the lockstep norm for most people, for most of American history. Most states had no compulsory education through the end of the 19th century; it was not assumed that most people would graduate from, or even attend, high school. There has never been a time when most people graduated from or even attended college. Prior to the industrial revolution, many people never “got a job,” but learned a trade through an apprenticeship to a family member or neighbor, or inherited a family business or a farm. It wasn’t the lockstep following of these steps that used to make people adults by their late teens; it was the fact that they’d had to do things for themselves their entire lives.
Public awareness and social policies have not yet caught up to the changes. Many features of U.S. society operate on the assumption that reaching adulthood occurs much earlier than it ordinarily does today.
But how is it established that adulthood is actually occurring later? I agree that many social policies and institutions don’t serve young adults well, but I’d argue that it’s because they presume a level of economic enfranchisement that’s out of reach for young graduates in the current job market.
The website criticizes earlier media portrayals of “Twixters” and “Adultolescents,” and says that the Research Network “takes young people seriously.” But that is not the implication of what its leader, Mr. Furstenberg, said to the New York Times: “We have not developed and strengthened institutions to serve young adults, because we’re still living with the archaic idea that people enter adulthood in their late teens or early twenties.”
But I don’t see any likelihood that this view is going to result in young adults being taken more seriously, rather than less. If we’re shaping policies and institutions on a new paradigm that 18-34 year-olds (yep, that’s the age range given for this new period of young adulthood on the website) are in fact not effective adults, how does that not lead to taking them less seriously as people capable of self-direction, and of full engagement in, and responsibility for, their own lives, choices, and contributions to society and democracy? What Furstenberg blindly and rather deceptively confuses, as the Chicago Tribune article did, is personal maturity and capacity for independence with achievement of economic and material goals.
If it’s an archaic idea that we become adults in our late teens or early twenties, then I think we need to start looking backwards for guidance on how to take young adults seriously. Because this forward-looking vision of Mr. Furstenberg’s of recognizing ever less-adult “adults” is not going to help us establish realistic ways to help people reach self-sufficient adulthood in a timely manner. It will further justify the social exclusion of young adults from full enfranchisement and economic participation. It asks us to further infantilize young adults rather than seriously considering remedies to the economic circumstances that make full independence so difficult. This view itself is helping to lengthen the road to adulthood, not providing solutions for young adults who are seeking greater independence.
June 16, 2010
From the New York Times this evening, “The End of the Best Friend,” about school officials attempting to regulate the closeness of elementary school-aged best friend pairs–and curtail it if they see fit. The article quotes Christine Laycob, a director of counseling at St. Louis Country Day School:
“Parents sometimes say Johnny needs that one special friend. We say he doesn’t need a best friend.”
No one who would say such a thing should be allowed around children.
June 14, 2010
In sad news this weekend, Brooks of Sheffield has announced that he’ll be ending his Lost City blog, which for almost five years chronicled the loss–and occasional preservation–of historic New York City establishments and architecture. I had only become a regular reader in the past few months.
Much as I enjoyed this fascinating blog, the need for it was always a melancholy thing in itself, necessitated by the constant loss of New York the way it used to be: not just the danger (which I won’t mourn) and grit, but the appreciation for beauty and grace over mere functionalism and profit. I can’t say I ever really knew that New York myself, having moved here only about six years ago this summer, but even so, I can feel the difference between my first visit and now. The proliferation of bank branches, chain drugstores, and Starbucks is…disheartening. St. Mark’s Place looks like a tourist trap, a theme park version of the countercultural haven it used to be. Brand new giant glass luxury condo towers blight the skyline of my own neighborhood–crassly discordant and out of context in this largely Hispanic working-class neighborhood of townhouses and pre-war walk-ups bordered by three parks. And don’t even get me started on Williamsburg. I don’t consider myself a privileged young transient or a long-term tourist here; I’d like to stay for many more years, and yet I worry seriously about the continuing affordability of doing so (not that I know where else I’d be able to live and still earn a living) if something doesn’t change drastically.
What makes me sadder, and more anxious, is when Brooks says in his farewell letter that in five years of keeping the blog, he doesn’t feel that his writing has made any real difference in the course of events. That makes me wonder what kind of chance writers, artists, and agitators of all kinds really have when up against the basically unlimited money and power of business interests and callous and opportunistic “leaders.”
Still, I’m not quite as pessimistic as Brooks. Change is really the only constant; old New York is worth mourning, but nothing, nowhere, ever gets to stay the same. That doesn’t mean that things only get worse. Last year we got the High Line Park (an abandoned elevated train track converted into a floating garden walk), and next year we’re scheduled to get the High Bridge pedestrian promenade back. My neighborhood (which shall remain unnamed) has also seen the opening of several eclectic small cafés and bars, and the best cheapo Chinese takeout place ever, in previously empty or ramshackle corner storefronts, not just the monstrous luxury condos.
I went walking in Central Park at night for the first time last week…and that’s not the death/robbery wish that it used to be. I am pleased to report that no harm came to me. Teenagers, college students and hostel visitors were clumped in the grass around the pond with picnic blankets, guitars and open containers. A whole tent city of children was camped out on the Great Hill. I saw a night heron swoop silently across the water to a ragged old willow tree that seems to have served as home base to several different birds of prey the last few years, and the first fireflies coming out.
In no way do I intend to try to take over where Lost City is leaving off, but I will continue, in my own way, to document and share my own personal and favorite aspects of the city–the corners of New York which still remain magical, genuine, cryptic, romantic, obscure, unfinished and unpolished. And even now there is no shortage of them.
June 10, 2010
I collect a lot of links to other blogs and sites I enjoy, but I wanted to call special attention to one today, the Gulf Oil Blog, by Dr. Samantha Joye from the University of Georgia. I found this blog in response to a commenter from the “Real nerd girls” post, who, after I mentioned the women working in the Gulf to respond to the BP oil spill, wondered who they were. You may have seen Dr. Joye’s name in the news recently; she’s the leader of a research team tracking and sampling one of the two giant underwater oil plumes. (I went to UGA but did not know Dr. Joye personally…and didn’t do so well in Marine Biology, so I’m all the more amazed by her work right now.)
It’s a really beautiful blog, with both horrifying and beautiful photographs of what the team is seeing in the Gulf, and much more detailed discussion than you’d find in the mainstream news of the kind of science being done on the plumes. Reading about the aspects of the spill the team is studying, it’s stunning to realize how little we really know about deep sea ecosystems, the biochemistry of what’s happening, and the possible long-term impact of a spill like this, and how important this knowledge will be to protecting our world going forward. The conditions of the spill are truly unprecedented, and this could, one hopes, be a once-in-a-lifetime opportunity to gain that knowledge–knowledge that Dr. Joye’s team is working to extract from their data right now.
I need to get some frustrations out of my system:
1. To board train: when doors open, let the people get off, and then get the f*ck on the train. Do not look around to inspect train before getting on. Do not stand right inside the door carefully deciding where to sit. Get the hell on the train and sit down.
1a. Do not board the train, stop right inside the door, and plant your feet in the “wide stance.” Nobody appointed you door guard or subway bouncer.
1b. If the train is crowded, and there are empty seats, sit in them. There are people still trying to get on behind you. You’re not being courteous by not sitting; you’re blocking access to both seats and aisle space for other people.
2. To exit train: when doors open, get the hell off the train.
3. If you must stand: hold on to the pole with your hand. Not your entire back.
4. Tourists: when the doors close, the train is about to move. Hold on to something, or you will fall, most likely onto me. Stop it.
5. Parents: please prevent your small children from swinging around or climbing on the poles. Besides that it’s rude to other passengers who don’t necessarily enjoy small children swinging in our faces, or who might actually need to use the aisle, there’s this thing called momentum. The train can stop suddenly and unsecured objects, like your child, will fly about 10 feet on the way to cracking his head open.
6. Men (usually): You know what I’m gonna say. Close your legs. Seriously. You don’t have anything that requires taking up two and a half to three seats. This is particularly addressed to the guy who spent a whole subway ride, on a crowded train, giving me dirty looks and huffing because my knee was touching yours, when my legs were crossed and yours were wide open.
7. Everyone: Just learn how to read the subway map. I understand that it has lots of different colors on it, but it truly, truly isn’t that complex. (I exempt from chastisement tourists who don’t speak or read any English who make the sad mistake of trying to get anywhere, but particularly Brooklyn, on a Sunday, because there is no way you could’ve known better, and there is truly no comprehensible information anywhere in any language about how the C train is going over the F line, or what the fact of there being no uptown local trains will do to your trip, or whatever the f*ck is going on this weekend. I’m sorry. Good luck.)
ADDENDUM, 6/9/10: For exiting the subway station in the rain: Do not clump at the top of the stairs wondering whether you really ought to exit to the sidewalk. Just do it. It’s just water. The quicker you walk out into it, the quicker we’ll all get to be home.
June 7, 2010
Whenever something starts falling apart again in our deathtrap of a building, people invariably tell us that we need to just move. And we seriously consider it…we do.
But then we look out our front door over Central Park, which is right across the street.
Twenty of the entrances, or “gates,” into Central Park have names, in honor of various groups, professions or guilds and their contribution to the city’s history and prosperity. I’ve always found it personally appropriate that the nearest gate to me is “Stranger’s Gate.”
Every year when TimeOutNY or New York Magazine or someone runs their yearly “best neighborhoods in NYC” feature, we cross our fingers that they haven’t thought to mention ours…because once a neighborhood is “discovered,” it’s all downhill from there…but usually, thankfully, they haven’t, and we breathe another sigh of relief.
And every time I go for a walk in the evening in the springtime, I remember again why I feel so lucky to get to live here.
Last night I walked, as I often do, up Morningside Drive, down Broadway and then over to Amsterdam Ave., basically circling Columbia University and the Cathedral of St. John the Divine.
This is my favorite angel of the cathedral, on the east end overlooking Morningside Drive. I like him standing guard quietly in the shadows, behind the trees, in contrast to Michael, who’s slaying Satan out front in the sunshine.
June 2, 2010
Jezebel ran this today. A casting call is out for a new “reality” TV show, NERD GIRLS. “Smart, sexy, and tech-savvy? WE WANT YOU!”
Stuff like this always burns me up.
Firstly, they’re not really looking for nerd girls. They’re looking for wannabe reality TV stars willing to wear thick plastic cat eye glasses with their usual makeup and short skirts, “intentionally sex up their tech personas,” and embody a fetishized fantasy of what pop culture thinks a tech-savvy fantasy girl should look and be like. Here’s an example, right on the Nerd Girls website: http://www.nerdgirls.com/page/learn
Oh, here are some more: http://www.nerdgirls.com/profiles/
Secondly, they’re only further promoting, not undermining, hurtful and harmful stereotypes about real live girl nerds. The casting call quotes a Newsweek article from 2008 (which burned me up then, but I wasn’t writing a blog yet): “The Nerd Girls may not look like your stereotypical pocket-protector-loving misfits…”
In other words, Oh, but these girls aren’t like those girls. These girls are pretty. These girls are sexy. These girls are fun and flirty and know how to dress right–like real girls. Not like nerdy girls.
Those girls exist (we weren’t wearing pocket protectors anymore, though, even in the early ’90s), only to the producers of NERD GIRLS, they’re not even good enough to be called nerds anymore.
I have news for the creators of NERD GIRLS. I was a nerd. I am a nerd. And it wasn’t fun. It wasn’t sexy. I wasn’t pretty; I still can’t stand the feel of makeup on my face. Things were not good for me, and nobody wanted me the way I was, for many, many years. But THAT is not what you want to make a TV show about, because that’s not what your audiences want to see and acknowledge.
“Why are Nerd Girls hot right now?” the website asks. But nerd girls are not hot right now. A romanticized, superficial and highly-sexualized fetishization of a fantasy girl pretending to be a nerd is hot right now. I’m willing to bet that things are just as hard as they’ve ever been, and completely not hot, for real girls who don’t fit pop-culture notions of what’s desirable. Who don’t fit in with their peers. Who love science, computers, or books but don’t also love makeup and fashion or look like a Seventeen model. Who can’t wear those shoes because they hurt. Who are odd, who are introverted, who don’t make friends easily. It’s lonely and hard to be smarter than everyone around you, or uninterested in stupid peer culture, or rejected because you’re not willing to pretend to be–not sexy. People don’t love you for it. They sure as fuck don’t want to put you on TV for it. The pretty plastic picture is easier.
NERD GIRLS claims a couple of different goals: To dismantle myths that boys are better at math and science than girls, and that “a female engineer is socially inept with no sense of style.” And, from the website (I’ll be done with this post soon, because I wanna barf every time I have to go look at that website again), to “celebrate smart-girl individuality that’s revolutionizing our future,” and to “encourage other girls to change their world through science, technology, engineering, and math, while embracing their feminine power.”
Okay, so just do that. It would be fantastic to see a series about real female scientists and their struggles in the school and work world, normal women with prestigious scientific careers. I would adore watching a series that really was going to deeply explore and debunk myths about girls’ mathematical ability and follow their journeys through school. But what you’re selling here is not individuality. It’s a pinup girl, just a different one than we’ve seen before, which has little to do with the real lives or feminine power of real girls. It’s nothing but pornography for people who would rather that real, awkward, smart girls were something prettier and more acceptable to mass tastes than they are.
This, NERD GIRLS, is hurting real girls. This is just holding up one more unrealistic, unattainable, beautiful package that they have to embrace or else be judged as not enough.
As an antidote, here are a couple of my favorite real girl nerds:
Barbara McClintock, who discovered transposons, the mobile genetic elements that can jump between chromosomes. She was my historical scientist heartthrob in high school.
And Alice B. Sheldon, who wrote acclaimed, Nebula Award-winning science fiction for many years under the pen name of James Tiptree, Jr. She was a Renaissance woman, having also worked as a watercolor artist and an aerial photo-intelligence reader during WWII, and having run a poultry farm. There’s a fantastic biography of her out, James Tiptree, Jr.: The Double Life of Alice B. Sheldon, largely about her painful struggle to be herself in a world that wasn’t crazy about who she really was.