Narcissus Bows, but Only to See Himself

James Di Palma-Grisi, Columnist

After reading a few articles about Gen Y and the workplace, I have come to the conclusion that the system of beliefs Gen Y seems to hold rests on a single factor: elevated expectations. Like a banker in heat, the entire generation seems to believe itself important, perhaps to a narcissistic degree, and capable of enacting dramatic change in the workplace.

An NPR article entitled “It’s up to you, New York (And Job Growth)” tells the story of Tsivia Finman, a 24-year-old college graduate with a life-threatening illness and without health insurance, looking for a job in New York. In January 2009, Finman moved to New York with two degrees, two grand and one plan: live in New York City looking for a job until there is only enough money left to buy a one-way flight back to Michigan.

“You always think, ‘I’ll find that job, I’ll find that job,’ which is why I didn’t apply for Medicaid earlier,” she explains. “It’s like something is gonna come along, I’ll have that work, but you can not do what I’m doing. It’s playing with fire, and it’s scary. It’s scary.”

As we learn from the article, Finman eventually applies for Medicaid, accepting the fact that her own health is more important than her fierce desire to believe things will turn out well for her. Medicaid seems to be a denouement of sorts, her admission that the world will not bend to her call and that she must rely on others for her own safety and health. But, crucially, she shows no signs of remorse; she knows full well that you “can not do what [she’s] doing,” and yet she continues to play with fire.

Call me crazy, but if I knew something wasn’t working, I would return well in advance of my money running out completely. I may sound like a miniature Andy Rooney, but Finman’s situation strikes me as having a cyclical quality. We’ve heard the mythic American Dream stories of ages past, of foreigners coming to America believing the streets were paved with gold, only to find a country without mobility. We’ve heard these stories, and we know them to be false. But it seems that we, for all our talk of breaking the mold and improving everything, are falling into the same traps as our forebears, and are every bit as unaware.

Take Barack Obama, for instance. We all thought he would bring a generational uprising against the traditional mores of Washington and business in the United States. We believed he would end wars and change the way the country saw itself and the way it was perceived abroad. What does that idealism remind you of? The student movements of the 1930s, 1960s and 1980s, perhaps? One could trace that attitude back to the revolutions of early modern Europe or the American Revolution. Of course, Obama—with Congress—passed several pieces of landmark legislation, but the shining future that was promised (I don’t care how many “lowerings of expectation” there were; the promises rang visionary for many listening) never materialized. But then, it never does. Nor should it be expected for the workplace.

And yet, there is Tsivia Finman (and probably thousands, maybe millions, more like her), bringing us, however abruptly, to narcissism. In the 1950s, 12 percent of teenagers queried thought of themselves as important. By the late 1980s, that number had risen to 80 percent. There is so much talk of Generation Y’s (and Obama’s) pragmatic idealism, an ideology that supposedly lets us work for the ideal while somehow rationally achieving the best. Perhaps narcissism really ought to be added to the equation.

Generation Y Voters Focus on Policy, not Party

Matthew McDermott, Columnist

Today’s youth, more than any generation prior, are disaffiliated from both parties in our current two-party political system. A rather clichéd and unremarkable statement—I know—but one with real impact on the future of our political system. While working on the 2008 campaign, I was fascinated by voter enthusiasm and the large turnout, particularly amongst the 18-29 year-old generation. Had the Democratic Party changed politics forever in this country? To realists: no, not really. But for the first time in decades, youth turnout exceeded turnout among those 65 and older (18 percent to 16 percent), and young voters went 66 percent for President Barack Obama. Most astounding, though, and a trend that has only grown in the last two years, is that younger voters were not showing up at the polls to vote for the Democratic Party. They were showing up to vote for the man. For the first time, a subset of voters is putting policy before party—an incredible feat in today’s political system.

It has been conventional wisdom throughout the generations that America’s young people perform the role of an anti-establishment, oppositional force in our political system. Youth are easily associated with spurts of grassroots activism, often using extreme and sometimes futile measures to promote causes overlooked by the majority in power.

In my many political musings I’ve often wondered about the effect of grassroots-style campaigns that initiate change from outside the political system. Would it instead be more effective to have an advocate within the political system who fights for the causes of the grassroots? To further their cause successfully, should America’s young people voice their dissent within the political system instead of protesting on the fringe?

One young person who shares this mindset is Edwin Pacheco, the 28-year-old, newly inducted Chairman of the Rhode Island Democratic Party.

As a brief background (since, to my dismay, many of you don’t follow the political happenings of this darling state), Ed was first elected to public office while still a teenager. In 2001, he was elected to the Burrillville School Committee. There he provided what I’ve long felt was missing on school committees across the country: a strong voice to represent those most affected by the Committee’s actions, the students. He quickly rose to become Chair of the committee, and a year later won a seat in the Rhode Island House of Representatives—becoming one of the youngest legislators in the history of the state. As a State Representative, Pacheco has been a strong advocate for youth issues and young people’s ability to participate in the political process. He introduced (and successfully overrode the Governor’s veto to pass into law) voter pre-registration legislation allowing 16- and 17-year-old high school students to pre-register to vote before they turn 18. Legislation was also enacted that gave high school students the ability to work as poll workers on Election Day.

And just this past year, fortifying his role in the Democratic Party, Ed became the Chairman of the state party, providing a fresh image to what had become an ancient political machine.

On merit, his rise through the political ranks is stunning on its own, but his legislative success and forward thinking for the Democratic Party have been just as stunning, if not more so. As Chairman, Pacheco is taking politics local, visiting each city and town Democratic committee with his pledge to increase the use of social media and engage activists in the political process. This sentiment is especially important for youth, who have begun to define themselves more by policy and less by party affiliation. Rick McAuliffe, a lobbyist, Democratic fundraiser and Pacheco supporter, says constant contact with the grassroots is particularly important in an era of declining party loyalty. Party leaders, he says, can no longer count on reflexive support:

“The days of, ‘I’m a Democrat and that’s good enough,’ are gone.”

I touched on this sentiment in some of my first pieces for Early Risers: while party affiliation among youth voters still skews toward the Democratic Party, they have stronger ties to certain liberal social and fiscal policies. It is the very reason we are seeing more youth identifying as “progressives,” “libertarians,” etc. than as Democrats or Republicans.

The irony in this situation is that disaffiliation, the very trend Chairman Pacheco is working to reverse by reengaging youth in the party system, is the biggest threat to Democratic chances in Rhode Island’s gubernatorial race this November. After a Republican Governor for most of the last 25 years, GOP support in the state (which has one of the highest unemployment rates in the country) has tanked, and Republicans have no chance of winning this November. But the Democratic nominee, General Treasurer Frank Caprio, faces a bitter battle from former Republican Senator, now Independent, Lincoln Chafee. The battle is not over the state’s political moderates but over the left wing of the Democratic Party, which, along with union support, is defecting to Chafee. In fact, in the latest Rasmussen poll, Caprio is besting Chafee only 49-34 percent among the Democratic base. A majority of the Chafee vote stems from youth voters and liberal voters tired of supporting party candidates who don’t embrace progressive values.

There are real parallels between the candidacies of Lincoln Chafee, Charlie Crist, Michael Bloomberg and Barack Obama. These candidates, regardless of their respective electoral outcomes, have been able to make their political races less about party and more about standing up for policies.

It remains to be seen whether Pacheco, in his new role as Chairman, can transition the political machine from party partisanship to advocacy for true democratic social and fiscal values. Mr. Pacheco must realize, as should all party leaders across the country, that Generation Y is not beholden to any established political party and will remain an unreliable voting bloc until it sees issues stand before any (D) or (R) come election time.

As an aside, this will be my last post from within our lovely borders. This weekend I’m hopping the Atlantic to begin my time at the London School of Economics. It certainly will be amusing to watch the outcome from abroad.

I’m Sorry, I Thought This Was America! | Volume 2, Food

James Sasso, Associate Editor

Part I: Health

America, the country of abundance, the country of limitless land, the country with a chicken in every pot, has an unhealthy love affair with food. In fact, I would go further and say that Americans have a clinical obsession with food. We developed a TV channel dedicated to the idea that people would tune in to watch, not even eat, food. We have movies about restaurants and chefs. There are reality shows in which we watch others eat and magazines dedicated to cooking. In such a culture one would think that the American food system reigned as the most sophisticated, sustainable, healthy, tasty and overall benevolent in the world, but it is far from it.

Instead of such a pristine agricultural world, America’s food systems (including meat, fish, dairy and agriculture) are among the most disgusting in the world. American farms cherish size and speed over the quality of the product, which leads them away from more natural methods of farming. To ensure size and speed, massive farms employ industrial business techniques in which animals, fruits and vegetables are treated as any other commodity—not one that enters the bodies of humans. Animals are given unnecessary hormones and antibiotics to bring them to market size faster, and they are not allowed to move so that they fatten in less time. Vegetables and fruits are manipulated to grow to greater sizes through genetic engineering, chemical-laden fertilizers and dangerous pesticides.

Sure, these techniques permit a greater yield of meat and produce, but at what cost? The animals in factory farms are disease-ridden, even with the massive use of antibiotics, because they are living in an unnatural state. Factory farms cram—and I mean cram—as many animals as possible into tight spaces simply because it saves money. These businesses want to increase profit at whatever cost to the quality of the product, and they largely get away with it because farms, even these obviously industrialized farms, are not subject to the same regulation as other polluting, hazardous industries, simply because they are “agricultural.” I won’t get into the gory details (as they are quite scary), but these animals are tormented in the name of cheap meat that everybody can afford, thus increasing the profits of these megafarm companies.

As for produce, megafarms do not tend to the quality of their individual plants as a small farm would. Produce megafarms literally farm thousands upon thousands of acres and, therefore, need to use more indirect methods of “caring” for their product, namely fertilizers and pesticides. Normally the word “fertilizer” does not have a negative connotation, but when this farming aid is laced with so much nitrogen that it acts as a veritable steroid for the vegetables, one can say that this food is not grown naturally and that this fertilizer is hardly a nurturing cultivator.

Generation Y has grown up amid the evolving food culture in America. There certainly are positive aspects to the amount of attention we give to food, such as an increased awareness of the aforementioned atrocities, an evolving restaurant revolution in which chefs are producing higher-quality dishes (using more locally grown ingredients because customers demand it) and a noticeable return to small, family-run farms. All in all, Generation Y is the most “foodie” generation. We crave not only food that fulfills the dietary needs of the day, but food that tastes delicious and meets standards of quality.

From a foodie standpoint, let alone an animalist or environmentalist perspective, the modern megafarm system dangerously harms the quality of food. Food travels on average 1,500 to 2,500 miles before it reaches the table! That’s almost half the country! Food products, with a few exceptions, were not meant to travel that kind of distance. Surely refrigeration has extended the shelf life of food, but this does not change the fact that food producers literally have been forced (by their own methods of production) to evolve the food so that it can handle such long journeys. Produce has to be picked before it matures, which increases the amount of time before it rots. Invariably, this affects taste.

Have you ever eaten a strawberry during strawberry season from a local farmer? Does it not taste entirely different from a strawberry sold at Stop and Shop? It’s a fact: local foods grown in season and in a natural manner taste better. Don’t believe there is a difference? Wait for apple season and compare; eat an apple from an orchard, and then go eat one from the local megamart. Your life will be changed forever. By the same token, factory-farmed meat tastes like leather when compared to naturally grown, properly raised, hormone-free, antibiotic-free, locally grown meat. I guarantee that once you eat a steak that has been treated correctly by a local farmer, you will never be able to eat a steak from the supermarket, let alone McDonald’s, again.

Whether or not you care about the environmental and social effects of mass-produced meat and megafarmed vegetables, as a generation of people who genuinely care about the quality of their food, we can agree that our food system needs to change. We watch enough Food Network, we watch enough Top Chef, we read enough about food to know that taste is important. Food has become more than sustenance; it is an art form. And as with all art forms we care about, the end product should be the best it can be. When eating, both for pleasure and with a critical mouth, taste always trumps other aesthetic qualities such as size or visual appeal. Our current farming system does not provide our budding taste buds and evolving sense of food with a proper template for creating masterful works of art.

So, Generation Y, are we foodies or are we mere consumers? Do we care more about the quality of what we eat or how little it costs? I say we should embrace our “foodiness,” and the growing popularity of chef culture suggests the label fits. We can no longer linger in this hypocrisy of food, treating it in one instance as a cheap consumer good and in another as the basis for great art.

Part II: Environment

As the generation on which the current environmental crisis will have the greatest effect, one would think that we would care about major causes of the food disaster in America. Alas, we would rather ignore problems in order to continue living in the comfort of our polluting ways. For once I am sorry that this is America. Food production should not increase pollution, but while fossil fuels contribute significantly to global warming, factory farms are not far behind in their disruption of the natural order of the world. And sadly, factory farming and modern megafarms contribute to the havoc we wreak on the environment in more ways than fossil fuels do.

Factory farming does its fair share of air polluting, contributing 18 percent of total greenhouse gas emissions. Factory-farmed animals produce many times more waste than their natural brethren, and to deal with this waste, factory farms create massive pools filled with liquefied feces (farms use lots of water to turn it into a liquidy mess). These pools generally are uncovered, and the methane leaks freely into the air. Methane in such large amounts is considered a major player in global warming.

The feces from these “pools” generally is used as fertilizer on large megafarms. Usually this would be a good thing–manure makes good fertilizer, and in many cases this holds true even for factory-farmed animals–but factory farms greatly increase the chances of disease in the animal, which means disease in the waste of the animal. When diseased waste is sprayed onto vegetables as fertilizer, the plants, too, become diseased, leading to outbreaks of E. coli or salmonella. We’ve seen these outbreaks in spinach and tomatoes in increasing numbers over recent years.

Besides using tainted fertilizers, megafarms employ numerous questionably safe growing methods, namely pesticides and nitrogen, to ensure that their plants come to unnaturally large size in unnaturally little time. These chemical additives pollute the soil and seep into the drinking water of America. Nitrogen runoff from fertilizers used in the Midwest has directly caused a “dead zone” at the mouth of the Mississippi River, where oxygen-consuming bacteria have taken over and nothing else can survive. Pesticides, at the same time, kill the honey bees that are necessary for the ecosystem to prosper.

Furthermore, to increase profits, companies have developed monocultured plants that all react to stimulants such as fertilizers in the same fashion. These uniform, abnormally large and quickly maturing plants lose their resistance to insect invaders, which adapt to the poisonous pesticides. The megafarms are forced to use ever more chemicals to ward off the increasingly resistant crop-eaters. For the most part, these monoculture plants are the grains and corn fed to livestock in this country, increasing the amount of poison that enters the bodies of the animals we eat, which in turn increases the amount of chemicals in our diet. At the same time, the increased herbicides and pesticides used to protect the incredibly vulnerable–albeit incredibly profitable–plants lead to an amplified presence of these chemicals in the water and aquifer systems.

I thought our generation would understand the problems of environmental damage. Have we not seen the direct effects of pesticides and herbicides on animals like birds? Have we not been educated rigorously to understand the dangers of polluting our drinking supply? These massive farms and breeding prisons do nothing beneficial for the environment. They skirt regulation by claiming to be agricultural instead of industrial, and they release many times more toxic waste into the environment than any nuclear power plant! Our food system is being polluted literally from the inside out! I thought America was a modern, progressing country, one that had left behind the practice of using dangerous chemicals in products we eat. I guess I was wrong.

Follow Up on Youth Unemployment: Minimize Government Intervention

Eric Waters, Columnist

Shortly after writing my previous article on youth unemployment, it became clear that simply pointing out a problem was not enough. We must ask another series of questions in order to try to solve this catastrophe. What are the underlying problems causing the symptom of youth unemployment? Where, when and how should the government become involved? And where the government must intervene, what policies can be put into place to maximize economic growth and hiring? Of course, with such a dynamic economy and huge government oversight, we will only be scratching the surface, but the fundamental causes of the economic depression of youth raised here will provide a foundation for deeper examination.

The easiest place to point fingers is the current state of the economy. Without starting a political argument as to why the economy is as bad as it is, let’s look at how it has affected youth unemployment. Naturally, in a downturn of this magnitude businesses are hesitant to hire. Youth are affected disproportionately because we tend to be the least skilled in the workforce. Unfortunately, we are the first to be laid off and the last to be hired. Another side effect of the downturn is the delayed retirement of millions of baby boomers. I remember hearing as a senior in high school, and subsequently in college, that my generation would be put onto a fast track in the corporate world as the baby boomers’ working lives came to an end. Well, if they are not retiring, businesses are not looking to replace them. So, how can we best get the wheels of the economy turning again?

The fastest, easiest way to do so is to reduce taxes. The Bush tax cuts are set to expire at the end of this year if the Obama administration does not act to extend them. One of the best ways to continue to watch the economy tighten instead of blossom is to decrease the amount of money in people’s pockets by raising taxes. Otherwise, let’s do the right thing and keep money in the pockets of the people who earned it. This will get small businesses hiring and people spending, both of which will help get more youth into the workplace.

Another problem that may not be as obvious is the minimum wage. Digging a little deeper, we find that arbitrarily raising the minimum wage is more harmful than beneficial. Halie Anderson wrote in an article in the Journal News, “Between 2007 and 2009, the hikes in minimum wage reduced employment for teens aged 16 to 19 by 12.4 percent, which translates to about 98,000 fewer teens in the work force.”

I don’t know about you, but I would prefer to have almost 100,000 additional teenagers saving for college, saving for a car or simply making money. The article goes on to state: “Although some argue that the minimum wage should be higher to reduce poverty…they are not taking into consideration that most minimum wage earners aren’t poor.”

This should calm those of us who argue that the minimum wage must be raised to what is called a “living” wage. Again, this seems to be another field where the government would be better off letting the free market decide wages. If a ready, willing and able person will perform a job for five or six dollars an hour, why should the government prevent him from doing so?

As you can tell, I generally prefer less government intervention when it comes to youth employment. Does anyone feel otherwise? I would love to receive comments on this topic, in agreement or disagreement. And to those who disagree, let’s find some common ground. Ultimately, we are all looking to improve the lives and well-being of Generation Y.

“Emerging Adulthood” or Emerging Hypothesis?

James Di Palma-Grisi, Columnist

If there is one thing psychologists do regularly, it is disagree with one another quite publicly about their pet theories (many of which, of course, later make it into the mainstream). The “emerging adulthood” hypothesis seems to be one of those contentious claims.

On August 18th, the New York Times Magazine published a widely read and widely responded-to article, “What Is It About 20-Somethings?” in which Robin Marantz Henig presents psychologist Jeffrey Jensen Arnett’s movement to view the 20s as a distinct life stage.

Just as adolescence has its particular psychological profile, Arnett says, so does emerging adulthood: identity exploration, instability, self-focus, feeling in-between and a rather poetic characteristic he calls “a sense of possibilities.” A few of these, especially identity exploration, are part of adolescence too, but they take on new depth and urgency in the 20s. The stakes are higher when people are approaching the age when options tend to close off and lifelong commitments must be made. Arnett calls it “the age 30 deadline,” Robin Marantz Henig reports in the New York Times Magazine.

Adolescence was recognized as a developmental stage in itself only recently in psychological history. Before such recognition, it was assumed that children were transformed instantly into adults. In one of my psychology lectures, the professor said, “Children are not little adults” and pointed out that neither are teenagers.

This, of course, is grounded in decades of biological science: during adolescence, sex-specific hormones propagate throughout the body, and key brain structures develop and mature. The prefrontal cortex, the brain area associated with decision-making and weighing long-term consequences, among other functions, continues to mature until an indeterminate age.

With that in mind, adolescents certainly are doing both—growing from experience and developing naturally—whereas it is unclear what the 20-to-30 year-olds are doing. “Never trust anyone over 30” may be a catchy maxim, but those under 30 certainly do not all fit into identical boxes either. The question is whether those closer to the dreaded marker are identical to those 10 or 20 years past it. If the hormones unleashed with adolescence change the mind and body of the adolescent, must there not be a similar change—similar in magnitude—to justify the designation of a new stage?

I find it strange that the stages of adolescence—the physical stages—are so well defined, whereas the continuous development of the prefrontal cortex is taken to constitute a novel stage in itself. If anything, such continuous change may constitute its very own stage: adulthood. After the maturation of the emotional brain, which manifests in the many outbursts of adolescence, those outbursts subside a few years later, age depending, and do not recur except under “normal” circumstances in which anyone would show emotion.

That transition, between adolescence and the next phase, is marked and observable. The notion of a somewhat arbitrary cutoff between 20 and 30, say, seems just that—an arbitrary cutoff. Would it be presumptuous to assume that an otherwise healthy cohort of overachieving 20-somethings would serve as their own sample, and their emotional health and occupational satisfaction as reliable indicators? At that point, and at the risk of sounding medieval, it seems that the difference between those in the phase and those outside it is the degree to which those outside have made up their minds about what to do. And if that is the case, the stage is not a stage at all, but rather a constant, apparently unyielding development of the prefrontal cortex throughout observed adulthood.

Also, if “emerging adulthood” were indeed a phase, it always should have been observable, the way adolescence always has been. The characteristics of adolescence are more or less universally observable, whereas the characteristics of “emerging adulthood” depend, it seems, on how many questions a person asks and how many jobs a person holds within 10 years. Of course, I am not a psychologist, but I believe we can be skeptical of these rather simplistic judgments about our cohort made by outside observers using highly questionable metrics to establish their claims.

Arnett’s push for “emerging adulthood” seems to use the same reasoning by which a poll based on a short questionnaire would declare, “Millennials eager to change world!” Simply holding a large number of jobs doesn’t automatically mean that someone is indecisive or questioning. It may reflect an awful economy (which Henig mentions in her article) or multiple interests, rather than a vague sense of possibility. For instance, say I am interested in materials science and constitutional law, to take two polar opposites—would it be all that unreasonable for me to hold a different job each summer, followed by two internships, one in each field? Does that justify creating a new stage of psychological development?

Then, the inevitable question—what does this mean for us? In my analysis, it means we are a generation unafraid to question our own interests, but not relegated only to questioning those interests. It very well may be that the period of questioning is a more general period of exploration, with the questions being asked and the opportunities being taken; the two are not mutually exclusive. The typical period of exploration and questioning may be just a period, not a discrete psychological phase, no matter how tempting the prospect of institutionalizing and codifying the actions of the young.

As a generation unafraid to question our own interests, we can avoid the rueful careerists’ lament: that they chose the wrong field or simply wish they had done something else. We also can avoid the less pernicious, but perhaps more directly annoying, regret of wishing we had explored something else—“I wish I had given it a shot”—and in so doing put our future frustrations to rest. In this regard, we can continue this “frustration saving” by trying our presumed primary interest now, so that in later periods of second-guessing we know we gave it a shot. Second-guessing now saves second-guessing later, when the options are all but permanently closed.

We should do the same with our politics: explore that socialistic or libertarian streak to see how it fits, or try on the jacket of the more plausible enemy. Democrats should become temporary Republicans to see whether, if the opposing positions were true, their own beliefs would change. Then, and only then, should we pass judgment on our own beliefs.

Assume for a moment that rent control really did cause housing shortages, and work backward. Why could this be? Could the market explanation be that lower prices allow people who could otherwise afford only to live together to live separately? If so, are there fewer open apartments for the people who otherwise would pool their funds? Then ask yourself if that is plausible. It is no different from pursuing parallel protocareers in medicine and Kazakh culture, trying each on to see how agreeable it is to your present configuration.

If flexibility truly is our forte, then we are uniquely positioned—and perhaps more favorably positioned than the polls may suggest—to solve the complicated problems for which ideology constantly fails to provide answers. If we can behave as the “policy mandarins” of the Obama Administration, regardless of our existing frames of reference, and examine the data as it is instead of how it would look if it were more stylish and immediately comprehensible, we may find ourselves inheritors of an ugly mess that we can, with enough sustained effort, solve and, in so doing, improve the world, as we apparently are so eager to do.

An Interest Group of Its Own

James Di Palma-Grisi, Columnist


The Pew Research Center polled Americans in late August on the response to the Ground Zero Mosque, a YMCA-like structure planned for construction near the World Trade Center site. Interestingly, the poll found that 51 percent of its respondents agreed that the mosque should be built somewhere else, while 62 percent of those same respondents agreed that Muslims should have the same rights as other Americans when building religious centers in local communities. The other option for those 62 percent was “local communities should be able to prohibit construction of mosques if they do not want them.”

Such strange behavior.

While the sources of the statistics reported here did not provide the demographics of respondents, I cannot help but assume that Millennials did not make up most of the poll’s respondents. Generation Y tends to agree with itself, regardless of party or social beliefs, on issues like the environment and the Iraq War. Before Obama’s election, a Peanut Labs poll of people aged 18-29 found that 89.5 percent of Generation Y support alternative fuels and, in accordance, 80.5 percent support climate regulation. Following this rather liberal trend, 75.6 percent want to end the Iraq War, and 68.6 percent want to end it immediately and bring the troops home.

While these statistics admittedly are outdated, the broad agreement manifests itself—not only among Gen Y’ers themselves, but across issues as well. Considering how divided the rest of the country is on these issues, there seems to be a solid consistency characteristic of Generation Y.

If the numbers are to be believed, Gen Y has its opinions in order and is unified on some national concerns. Given such strong, single-minded stances on other issues, we can only assume there is some unifying factor that informs Gen Y beyond media and partisanship. I would venture to guess that since the generation appears anti-dogmatic and self-centered—according to a USA Today poll, 81 percent say “to be rich” is the goal of the generation, 51 percent say “getting famous” and 30 percent say “to help those who need it”—the generation will favor a civil libertarian solution allowing the mosque.

This, of course, is conjecture awaiting confirmation or denial.

I’m Sorry, I Thought This Was America! | Volume I, Gay Marriage

James Sasso, Associate Editor

DISCLAIMER:

For those of you who are familiar with South Park (especially you religious watchers like me who can quote many of the show’s lines from its 14 seasons), the title of this editorial series, concerning the numerous hypocrisies of American politics, should be familiar. For those who don’t get the reference, don’t worry. Just take it as is; read, and you will understand.

Part I:  Legality

As America must know by now, Federal Judge Vaughn Walker recently overturned Proposition 8, which, prior to the ruling, made homosexual marriages illegal in the State of California. There are those who saw this as an affront to democracy: What gives a judge, not even a member of the Supreme Court, the right to disregard the wishes of a majority of citizens? After all, America is a democracy, right?

Wrong! America is actually, in very few senses of the word, a democracy. As every political science major and, hopefully, every high school student who has taken a civics class (or something like it) knows, the American government was established in hybrid form: a republic-democracy. It is in this very clear, albeit often overlooked, distinction that we find one of the major reasons why Judge Walker had the right to overrule Proposition 8. In a democracy, the people rule directly. Every citizen technically holds equal political weight, which means that the majority always rules, since no minority opinion can command sufficient political power to outvote or override the majority.

Obviously America does not operate as a pure democracy. In fact, when one looks back at documents from the founding of America, including the famous Federalist Papers, it becomes clear that the patriarchs of America all agreed upon limiting the tyranny of the majority. While they concurred that a government needed to be “for the people, by the people,” the founding fathers had the foresight to realize that majorities have the tendency to usurp the rights, or ignore the well-being, of those in the opposition. Since the founders desired to build a government based on Enlightenment principles of liberty and rights, they realized that multiple checks against the potential tyranny of the majority must be built into the American Experiment. As such, the unique American system of government, with both its republican and democratic aspects, can operate neither with full power to the people nor with representatives who have no need to answer to the public.

But it is precisely this system of shared powers, of checks and balances, that permits a judge to invalidate the wishes of the majority when they appropriate the rights of a less powerful minority. In the case of Proposition 8, a coalition of people who thought they were defending the so-called “sacred” word, “marriage,” managed to get a majority of Californians to vote against allowing gays the same right to marry as straights. Where is the justification for taking away such a right? What wrong had the minority of homosexuals done to garner a limitation on their rights to life and liberty? Nothing, except differ from the majority.

Judge Walker’s decision held such power because he did not make his argument from a purely moral stance. In fact, he used constitutional law to point out that the government of California has an obligation to provide all people with the same personal rights. In instance after instance he disproved the arguments of the defense not simply by calling their stance immoral, but by using legality to show that homosexuals do indeed have the right to marry. In essence he found that legally, because marriage does not necessitate procreation and homosexuals have the same ability to raise children as heterosexuals, gay marriage and heterosexual marriage are no different.

This legal justification isn’t limited to California. With a quick look at the American Constitution and its modern interpretation, one finds obvious reason to allow gay marriage nationally. The famous 14th Amendment, in its language, gives all citizens, in all states, the same basic right to live as one best sees fit so long as it does not interfere with the life, liberty and property of another. All citizens have equal protection of these rights. The government, being secular, cannot impose moral or religious beliefs on its people without an accompanying purpose; Judge Walker could find no such purpose in Proposition 8.

In a sense, the Constitution provides people with a right to privacy (or, if you want to get fancy, the Constitution contains penumbral rights that provide “zones of privacy”), and the government cannot interfere with the decisions of an individual unless it views those decisions as harmful. Enter the morality debate.

Part II: Morality

Opponents of gay marriage argue that gay parents cannot be good parents, that their children will be ridiculed and that a child will suffer from growing up in such an “unnatural” environment. Where is the proof of any of this? In fact, I know kids with gay parents, and they seem to have turned out better than the rest of us. Plus, in today’s world, children are taught not to ridicule people for being different. Generations Y and Z are noticeably less likely than previous generations to taunt or tease a person for any sort of perceived difference, whether it be race, disability, religion or sexual preference.

Furthermore, what exactly makes homosexual parents so unnatural? What about divorce? Is that natural? Is it natural to grow up with a single parent working three jobs who cannot be both present and a provider? Wouldn’t those children rather have more than one parent with a more stable life? I would argue that the epidemic of single mothers in the inner city is much more troubling than any perceived threat from homosexual parenting.

The modern world is changing, and the family structure is changing along with it. Mothers now work more often than they stay at home, and fathers are not always the breadwinners of a family. Look at my familial bracket; I have four parents. Four! Who says that’s natural? It certainly goes against the classical definition of child-rearing: the nuclear family. Yet, in all my unnatural rearing, I have never been ridiculed. And how about separation? By some estimates, the percentage of marriages that end in divorce has jumped to over 50 percent; divorce has become so commonplace that people no longer see it as an invariably bad thing. Society changes, and its members evolve to adapt to that society. And if society can change, surely classical definitions can change.

People argue that the sacred rite of marriage should exist only between a man and a woman, and that anything else will further drain the already weakened status of marriage. Marriage equality’s opponents believe that opening the doors to gay marriage will cripple the idea of marriage and “the family” so much that couples increasingly will have children out of wedlock (and raise them in tandem) without ever choosing to marry.

First of all, this seems unlikely. Today’s youth will not see the allowance of gay marriage as a blow to the “ideal of marriage;” instead, they will see it as proper because it fulfills the Constitution and the American ideal of equality. Second of all, maybe the type of situation in which one is reared does not matter, so long as both parents are stable, active, present and loving parties.

If marriage is meant to continue the human race, then it should mean nothing more than an ability to raise children well. In the modern world, homosexual couples have ample opportunity either to birth their own children (through in vitro fertilization or surrogate motherhood) or to adopt. In both cases, they continue the human race either by adding to its numbers or by cultivating a productive member of society.

So, America the hypocritical, America the hypocrisy, where is equality for all? Where is the protection for minorities laid out in the Constitution? Yes, homosexuals are different from the straight majority, but that should not matter. America has made long strides to rid itself of other inequalities through the Women’s Suffrage Movement and the Civil Rights Movement, among others. Homosexuals are definitively part of American society and, thus, definitively deserve the same rights as other Americans. After all, I thought this was America!

Why the Humanities Matter

Malik Neal, Columnist

In his novel Hard Times, Charles Dickens introduces us to the character Thomas Gradgrind—a notorious headmaster who, in the zealous pursuit of practicality, sees education only as a means to prepare pupils for profitable enterprises. “A man of realities,” Mr. Gradgrind views the humanities with scorn, seeing history, philosophy, literature and related disciplines as “destructive nonsense” with no bearing or relevance to individuals in contemporary society.

The spirit of Mr. Gradgrind is very much alive today, displayed in an increasing number of students who have decided not to indulge in the study of the humanities because, in their view, it does not prepare them for a specific, profitable vocation. This is not entirely surprising. As the economy struggles, so, too, do the humanities at today’s universities and colleges. The humanities, however, illuminate the human condition. They enrich critical thinking, provide us with historical knowledge and perspective and instill within us a constant desire to inquire. Such qualities are not only invaluable for the individual but are prerequisites for any meaningful participation in a democratic society.

The “humanities” really constitute the sum total of human literary, historical, cultural and artistic achievement on this planet. They are the essence of humanity. While every person obviously must have a job to live, that person lives within a context, and that context is our culture, which defines how we relate to one another.

One cannot understand society unless the elements of how that society came to be are appreciated. This appreciation, or, more precisely, knowledge, comes only through a study of literature, history, philosophy, religion and art. In order to function in a society, one must have at least some basic idea of how that society came into being in the first place. As an old African proverb puts it: “He who knows nothing of the past is condemned to remain a child forever.”

Moreover, the questions currently being debated in our country–immigration, religion, the role of government–are not entirely new. These are questions that thinkers have been discussing and attempting to answer for ages. The history and varied ways thinkers have approached these questions are an integral part of the humanities. Such historical knowledge and perspective is needed by any individual who seeks to be a leader in the 21st century.

In order to navigate around and through a society or culture, one must know its contours, and those contours are outlined by the society’s history, art, religious beliefs and philosophy. Knowing how to earn a living is only one small part of existing in a culture. A livelihood is essential, but it tells us little about how to conduct one’s life in harmony with other people, avoiding conflict, unhappiness and despair. In short, knowledge of the “humanities” gives one more than just diversion and transitory entertainment; it gives one a sense of purpose.

Humans should not, as Mr. Gradgrind suggested, be mere machines of utility. We have a deeper meaning and purpose. As a biblical phrase suggests, “Man does not live by bread alone.” In fact, knowledge of the humanities is what separates a self-actualized individual from a machine.

Forget 2010, the 2012 White House Is a Democratic One

Matthew McDermott, Columnist

In the common political ebb and flow of this country, perceptions affect electoral outcomes, but more importantly, these perceptions can change. Whether the issue is war, education, abortion or the economy, elections are a soapbox for voter discontent. It’s the reason single-party power in our Federal Government is rare, and it’s the reason Midterm Elections are usually dismal for the majority. It’s the reason the Obama Administration marched into office with a massive electoral victory, and why, 22 months later, the Democratic Party is fast approaching its greatest national loss in over a decade. But the very nature of our political system is also the reason Republicans should not expect success even in the short term after their November landslide.

A growing (and, if I may say, rather trite) discussion within the political world has been the Democratic Party’s plummeting poll numbers amongst young voters. Bringing this chatter to the forefront was a recent New York Times article suggesting that the youth vote this year is “up for grabs.” Polling this year by the Pew Research Center shows that Democrats hold a 20-point lead (57-37 percent) over the Republican Party among 18-29 year-olds. While still a massive party advantage, this is down from a 32-point lead (62-30 percent) just two years ago. Was November 2008 merely a blip amongst youth in what otherwise remains an evenly divided country? Doubtful.

Eric Waters wrote a refreshing piece last week on the staggering unemployment of youth in America today. While the national unemployment rate hovers just below 10 percent, among 20-24 year-olds unemployment rises to 15.1 percent, and among teenagers it nearly triples the national rate at 26.1 percent. It’s only rational to expect these numbers to have a pronounced impact on the Midterm Election, as they have in previous elections. Honestly, I’d quote James Carville, if not for the groan I’d give myself. It’s conventional wisdom to repeat, but it wouldn’t have to be repeated so often if partisans kept historical trends in mind: the economy drives the polls, and when the country is facing economic hardship, the party in power always feels the brunt of criticism.

The outcome in eight weeks will have nothing to do with the President’s ability to resonate with the public and everything to do with the prolonged economic recession. While this administration has taken large strides on issues positively impacting our generation–from easing the student loan process to health care–these measures become irrelevant when students are faced with an inability to find a job. In polling done by Gallup during the last week of August, a whopping 99 percent of respondents cited the economy as at least moderately important to their vote for Congress this year. Of those, 62 percent said the economy was “extremely important.” This was followed closely by “jobs,” extremely important to 60 percent of voters.

Republicans should learn from their own history. At this point in 1982, President Reagan had nearly identical approval ratings to President Obama’s, in both overall job approval and his handling of the economy. That same year Republicans lost 26 seats in the House, only to have Reagan win one of the largest electoral victories in history two years later. Bill Clinton, with a similar approval rating of 46 percent at this point in his first term, was in a similar position. Furthermore, in an interesting note passed along at The Monkey Cage, using the CBO’s projected real GDP growth for 2012, President Obama should expect a popular vote win margin in the next Presidential election of eight percent, give or take seven. In layman’s terms, that suggests an 85-90 percent chance of reelection in 2012, regardless of the expected electoral trouncing this November. Essentially the outcome in November will be the same outcome predicted by reasonable political scientists for over a year–dismal for the Democratic Party–but suggestive of nothing more than voter economic discontent.
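(For those curious where that 85-90 percent figure comes from, here is a minimal back-of-the-envelope sketch, assuming the forecast treats the projected margin as normally distributed with a mean of eight points and a standard deviation of seven. The chance the margin lands above zero is then

\[
P(\text{win}) = P(\text{margin} > 0) = \Phi\!\left(\frac{8}{7}\right) \approx \Phi(1.14) \approx 0.87,
\]

where \(\Phi\) is the standard normal cumulative distribution function: roughly 87 percent, squarely within the quoted range.)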

More concerning for the Republican Party—beyond the evolving demographic changes in this country—is the fact that the country continues to become more socially liberal over time. Obviously, young people are out of work, out of money and furious–and rightfully so. But most shocking in the Pew Center poll is that Republicans continue to have abysmal success in recruiting these economically strapped voters. Only 24 percent of youth identify as Republican, an insignificant change from the 22 percent in 2008. The report notes:

While the Republican Party picked up support from Millennials during 2009, this age group continues to favor the Democratic Party more than do other generations. And the underlying political values of this new generation continue to be significantly more liberal than those of other generations on many measures.

The latest polling indicates that 29 percent of youth identify as liberal, while 28 percent identify as conservative. Among Baby Boomers, only 18 percent identify as liberal while 43 percent identify as conservative. Beyond social stances on gay marriage and immigration, the same Pew research highlights that youth today have an intrinsically greater support for government and lower support for aggressive defense policies.

So should Democrats prepare their box of tissues for November 2nd? Honestly, I’d take a large sedative Tuesday afternoon and sleep before the polls close. But what the youth must realize–and I think they realize it much more than older generations do–is that long-term Democratic and liberal roots continue to solidify among 18-29 year-olds. America’s young people have not become much more conservative in the last 24 months; they’ve simply turned their attention to their bank accounts and forgotten the ballot box. Current economic hardships eventually will wither, and the Republican Party will be left unable to measure up on non-economic policy platforms.

Non-politics, A Multidisciplinary Approach

Allison Boldt, Contributor

The recurring theme of this series is how our generation differs from past generations. An obvious distinction drawn was our use of social media and other nontraditional venues of political participation. It makes sense that our political identity reflects the technology available to us. One could probably make the argument that even the sex appeal we crave has roots in our constant exposure to different forms of media, which lends itself to a fast-paced lifestyle.

But technology aside, our generation has another distinguishing characteristic that is not as widely discussed: we are statistically more likely than our parents to go to colleges or universities. According to the Bureau of Labor Statistics, 70.1 percent of all 2009 US high school graduates enrolled in some form of post-secondary education, up from 45.7 percent in 1959, when the BLS began collecting the data. Of those continuing on to college this year, about 60 percent will do so through a four-year institution.

On the surface, these statistics may seem a bit humdrum. But the impact of our college educations on our generational identity should not be underestimated. Regardless of what one chooses to study, or where, post-secondary education broadens our perspective as a generation by developing our critical thinking skills and forcing us to work with people we might not ordinarily choose to talk to, let alone work with.

Four-year institutions and other schools with “General Education” requirements are particularly influential in shaping our generation’s civic character. The required classes cover a wide range of practical and historical problems and leave graduates with a multidisciplinary approach to thinking about those problems.

Maybe I’m projecting here, but my impression is that my college experience was largely typical. I went to a medium-sized public school and was required to take courses in science, health, history, literature and the arts. My first year, I remember discussing genetically modified organisms in biology class, religious tolerance in history class and racism in literature class. Although I was unaware of it, I was learning about and having dialogues around political issues in every single class, without ever setting foot in a political science classroom (that came later) or realizing my interest in “political” issues.

It’s true that some Baby Boomers or Generation X’ers probably had similar college experiences. It’s also true that there is still a very real privilege gap in this country that divides those who can and cannot afford college. Still, the sheer percentage of Generation Y’ers going to college distinguishes us; it makes us more likely to think about issues through many lenses, as we were instructed to do in our classes as undergraduates. For instance, we approach climate change not through a political lens, but rather from the perspectives of biology, anthropology and environmental studies. When we think about poverty, we think less about the welfare reform debate and instead incorporate information from the fields of sociology, psychology, economics and literature. Further, we are much more likely than our parents to study highly interdisciplinary fields like Women’s Studies, Latino Studies and LGBT Studies, many of which were still taking shape throughout the 1980s and 1990s.

Politically, this multidisciplinary perspective might contribute to the stereotype that we are an apathetic generation. This goes back to the dichotomy between political issues and the traditional political system, which is simply more static than we are. From our perspective, we do think about politics and talk about politics—just not in a way where we follow how many votes a certain bill needs to pass into statute, or who is ahead in primary X. For many of us, this political horse race seems to miss the point. We would rather talk directly about the political problem at hand and apply a multidisciplinary analysis toward a solution. Rather than exciting us, “politics” seems just to get in the way.