List of top Verbal Ability & Reading Comprehension (VARC) Questions asked in CAT

Contemporary internet shopping conjures a perfect storm of choice anxiety. Research has
consistently held that people who are presented with a few options make better, easier decisions than those presented with many. . . . Helping consumers figure out what to buy amid an endless sea of choice online has become a cottage industry unto itself. Many brands and retailers now wield marketing buzzwords such as curation, differentiation, and discovery as they attempt to sell an assortment of stuff targeted to their ideal customer. Companies find such shoppers through the data gold mine of digital advertising, which can catalog people by gender, income level, personal interests, and more. Since Americans have lost the ability to sort through the sheer volume of the consumer choices available to them, a ghost now has to be in the retail machine, whether it’s an algorithm, an influencer, or some snazzy ad tech to help a product follow you around the internet. Indeed, choice fatigue is one reason so many people gravitate toward lifestyle influencers on Instagram—the relentlessly chic young moms and perpetually vacationing 20-somethings—who present an aspirational worldview, and then
recommend the products and services that help achieve it. . . .
For a relatively new class of consumer-products start-ups, there’s another method entirely.
Instead of making sense of a sea of existing stuff, these companies claim to disrupt stuff as
Americans know it. Casper (mattresses), Glossier (makeup), Away (suitcases), and many others have sprouted up to offer consumers freedom from choice: The companies have a few
aesthetically pleasing and supposedly highly functional options, usually at mid-range prices.
They’re selling nice things, but maybe more importantly, they’re selling a confidence in those
things, and an ability to opt out of the stuff rat race. . . .
One-thousand-dollar mattresses and $300 suitcases might solve choice anxiety for a certain tier of consumer, but the companies that sell them, along with those that attempt to massage the larger stuff economy into something navigable, are still just working within a consumer market that’s broken in systemic ways. The presence of so much stuff in America might be more valuable if it were more evenly distributed, but stuff’s creators tend to focus their energy on those who already have plenty. As options have expanded for people with disposable income, the opportunity to buy even basic things such as fresh food or quality diapers has contracted for much of America’s lower classes.
For start-ups that promise accessible simplicity, their very structure still might eventually push
them toward overwhelming variety. Most of these companies are based on hundreds of
millions of dollars of venture capital, the investors of which tend to expect a steep growth rate
that can’t be achieved by selling one great mattress or one great sneaker. Casper has expanded into bedroom furniture and bed linens. Glossier, after years of marketing itself as no-makeup makeup that requires little skill to apply, recently launched a full line of glittering color cosmetics. There may be no way to opt out of stuff by buying into the right thing.

Scientists recently discovered that Emperor Penguins—one of Antarctica’s most celebrated species—employ a particularly unusual technique for surviving the daily chill. As detailed in an article published today in the journal Biology Letters, the birds minimize heat loss by keeping the outer surface of their plumage below the temperature of the surrounding air. At the same time, the penguins’ thick plumage insulates their body and keeps it toasty. . . .
The researchers analyzed thermographic images . . . taken over roughly a month during June 2008. During that period, the average air temperature was 0.32 degrees Fahrenheit. At the same time, the majority of the plumage covering the penguins’ bodies was even colder: the surface of their warmest body part, their feet, was an average of 1.76 degrees Fahrenheit, but the plumage on their heads, chests and backs was -1.84, -7.24 and -9.76 degrees Fahrenheit respectively. Overall, nearly the entire outer surface of the penguins’ bodies was below freezing at all times, except for their eyes and beaks. The scientists also used a computer simulation to determine how much heat was lost or gained from each part of the body—and discovered that by keeping their outer surface below air temperature, the birds might paradoxically be able to draw very slight amounts of heat from the air around them. The key to their trick is the difference between two different types of heat transfer: radiation and convection.
The penguins do lose internal body heat to the surrounding air through thermal radiation, just as our bodies do on a cold day. Because their bodies (but not surface plumage) are warmer than the surrounding air, heat gradually radiates outward over time, moving from a warmer material to a colder one. To maintain body temperature while losing heat, penguins, like all warm-blooded animals, rely on the metabolism of food. The penguins, though, have an additional strategy. Since their outer plumage is even colder than the air, the simulation showed that they might gain back a little of this heat through thermal convection—the transfer of heat via the movement of a fluid (in this case, the air). As the cold Antarctic air cycles around their bodies, slightly warmer air comes into contact with the plumage and donates minute amounts of heat back to the penguins, then cycles away at a slightly colder temperature.
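
The balance the passage describes can be written compactly; the equations below are a standard textbook formulation, not taken from the study itself. The plumage surface at temperature $T_s$ radiates to surroundings whose effective radiative temperature $T_r$ (dominated by the cold Antarctic sky) lies below the air temperature $T_a$, while the surface exchanges heat with the air by convection:

$$q_{\mathrm{rad}} = \varepsilon \sigma \left(T_s^4 - T_r^4\right), \qquad q_{\mathrm{conv}} = h \left(T_a - T_s\right),$$

where $\varepsilon$ is the plumage's emissivity, $\sigma$ is the Stefan-Boltzmann constant and $h$ is a convective heat-transfer coefficient (all three symbols introduced here for illustration). Radiative loss to the cold sky can pull $T_s$ below $T_a$; once $T_s < T_a$, $q_{\mathrm{conv}}$ is positive and the air donates small amounts of heat to the plumage, which is exactly the paradoxical gain the simulation identified.
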
Most of this heat, the researchers note, probably doesn’t make it all the way through the plumage and back to the penguins’ bodies, but it could make a slight difference. At the very least, the method by which a penguin’s plumage wicks heat from the bitterly cold air that surrounds it helps to cancel out some of the heat that’s radiating from its interior. And given the Emperors’ unusually demanding breeding cycle, every bit of warmth counts. . . . Since [penguins trek as far as 75 miles to the coast to breed and male penguins] don’t eat anything during [the incubation period of 64 days], conserving calories by giving up as little heat as possible is absolutely crucial.
"Free of the taint of manufacture" – that phrase, in particular, is heavily loaded with the
ideology of what the Victorian socialist William Morris called the "anti-scrape", or an
anticapitalist conservationism (not conservatism) that solaced itself with the vision of a
preindustrial golden age. In Britain, folk may often appear a cosy, fossilised form, but when you look more closely, the idea of folk – who has the right to sing it, dance it, invoke it, collect it, belong to it or appropriate it for political or cultural ends – has always been contested territory. . . .
In our own time, though, the word "folk" . . . has achieved the rare distinction of occupying
fashionable and unfashionable status simultaneously. Just as the effusive floral prints of the
radical William Morris now cover genteel sofas, so the revolutionary intentions of many folk
historians and revivalists have led to music that is commonly regarded as parochial and
conservative. And yet – as newspaper columns periodically rejoice – folk is hip again,
influencing artists, clothing and furniture designers, celebrated at music festivals, awards
ceremonies and on TV, reissued on countless record labels. Folk is a sonic "shabby chic",
containing elements of the uncanny and eerie, as well as an antique veneer, a whiff of Britain's heathen dark ages. The very obscurity and anonymity of folk music's origins open up space for rampant imaginative fancies. . . .
[Cecil Sharp, who wrote about this subject, believed that] folk songs existed in constant
transformation, a living example of an art form in a perpetual state of renewal. "One man sings
a song, and then others sing it after him, changing what they do not like" is the most concise
summary of his conclusions on its origins. He compared each rendition of a ballad to an acorn
falling from an oak tree; every subsequent iteration sows the song anew. But there is tension in newness. In the late 1960s, purists were suspicious of folk songs recast in rock idioms.
Electrification, however, comes in many forms. For early-20th-century composers such as
Vaughan Williams and Holst, there were thunderbolts of inspiration from oriental mysticism,
angular modernism and the body blow of the first world war, as well as input from the
rediscovered folk tradition itself.
For the second wave of folk revivalists, such as Ewan MacColl and AL Lloyd, starting in the 40s, the vital spark was communism's dream of a post-revolutionary New Jerusalem. For their
younger successors in the 60s, who thronged the folk clubs set up by the old guard, the lyrical
freedom of Dylan and the unchained melodies of psychedelia created the conditions for
folk-rock's own golden age, a brief Indian summer that lasted from about 1969 to 1971. . . . Four decades on, even that progressive period has become just one more era ripe for fashionable emulation and pastiche. The idea of a folk tradition being exclusively confined to oral transmission has become a much looser, less severely guarded concept. Recorded music and television, for today's metropolitan generation, are where the equivalent of folk memories are seeded. . . .

As defined by the geographer Yi-Fu Tuan, topophilia is the affective bond between people and place. His 1974 book set forth a wide-ranging exploration of how the emotive ties with the material environment vary greatly from person to person and in intensity, subtlety, and mode of expression. Factors influencing one’s depth of response to the environment include cultural background, gender, race, and historical circumstance, and Tuan also argued that there is a biological and sensory element. Topophilia might not be the strongest of human emotions— indeed, many people feel utterly indifferent toward the environments that shape their lives— but when activated it has the power to elevate a place to become the carrier of emotionally charged events or to be perceived as a symbol.
Aesthetic appreciation is one way in which people respond to the environment. A brilliantly colored rainbow after gloomy afternoon showers, a busy city street alive with human interaction—one might experience the beauty of such landscapes that had seemed quite ordinary only moments before or that are being newly discovered. This is quite the opposite of a second topophilic bond, namely that of the acquired taste for certain landscapes and places that one knows well. When a place is home, or when a space has become the locus of memories or the means of gaining a livelihood, it frequently evokes a deeper set of attachments than those predicated purely on the visual. A third response to the environment also depends on the human senses but may be tactile and olfactory, namely a delight in the feel and smell of air, water, and the earth.
Topophilia—and its very close conceptual twin, sense of place—is an experience that, however elusive, has inspired recent architects and planners. Most notably, new urbanism seeks to counter the perceived placelessness of modern suburbs and the decline of central cities through neo-traditional design motifs. Although motivated by good intentions, such attempts to create places rich in meaning are perhaps bound to disappoint. As Tuan noted, purely aesthetic responses often are suddenly revealed, but their intensity rarely is long-lasting. Topophilia is difficult to design for and impossible to quantify, and its most articulate interpreters have been self-reflective philosophers such as Henry David Thoreau, evoking a marvelously intricate sense of place at Walden Pond, and Tuan, describing his deep affinity for the desert.
Topophilia connotes a positive relationship, but it often is useful to explore the darker affiliations between people and place. Patriotism, literally meaning the love of one’s terra patria or homeland, has long been cultivated by governing elites for a range of nationalist projects, including war preparation and ethnic cleansing. Residents of upscale residential developments have disclosed how important it is to maintain their community’s distinct identity, often by casting themselves in a superior social position and by reinforcing class and racial differences. And just as a beloved landscape is suddenly revealed, so too may landscapes of fear cast a dark shadow over a place that makes one feel a sense of dread or anxiety—or topophobia.

Around the world, capital cities are disgorging bureaucrats. In the post-colonial fervour of the 20th century, coastal capitals picked by trade-focused empires were spurned for “regionally neutral” new ones . . . . But decamping wholesale is costly and unpopular; governments these days prefer piecemeal dispersal. The trend reflects how the world has changed. In past eras, when information travelled at a snail’s pace, civil servants had to cluster together. But now desk-workers can ping emails and video-chat around the world. Travel for face-to-face meetings may be unavoidable, but transport links, too, have improved. . . .
Proponents of moving civil servants around promise countless benefits. It disperses the risk that a terrorist attack or natural disaster will cripple an entire government. Wonks in the sticks will be inspired by new ideas that walled-off capitals cannot conjure up. Autonomous regulators perform best far from the pressure and lobbying of the big city. Some even hail a cure for ascendant cynicism and populism. The unloved bureaucrats of faraway capitals will become as popular as firefighters once they mix with regular folk.
Beyond these sunny visions, dispersing central-government functions usually has three specific aims: to improve the lives of both civil servants and those living in clogged capitals; to save money; and to redress regional imbalances. The trouble is that these goals are not always realized. 
The first aim—improving living conditions—has a long pedigree. After the second world war Britain moved thousands of civil servants to “agreeable English country towns” as London was rebuilt. But swapping the capital for somewhere smaller is not always agreeable. Attrition rates can exceed 80%. . . . The second reason to pack bureaucrats off is to save money. Office space costs far more in capitals. . . .
Agencies that are moved elsewhere can often recruit better workers on lower salaries than in capitals, where well-paying multinationals mop up talent. 
The third reason to shift is to rebalance regional inequality. . . . Norway treats federal jobs as a resource every region deserves to enjoy, like profits from oil. Where government jobs go, private ones follow. . . . Sometimes the aim is to fulfil the potential of a country’s second-tier cities. Unlike poor, remote places, bigger cities can make the most of relocated government agencies, linking them to local universities and businesses and supplying a better-educated workforce. The decision in 1946 to set up America’s Centers for Disease Control in Atlanta rather than Washington, D.C., has transformed the city into a hub for health-sector research and business.
 The dilemma is obvious. Pick small, poor towns, and areas of high unemployment get new jobs, but it is hard to attract the most qualified workers; opt for larger cities with infrastructure and better-qualified residents, and the country’s most deprived areas see little benefit. . . . Others contend that decentralization begets corruption by making government agencies less accountable. . . . A study in America found that state-government corruption is worse when the state capital is isolated—journalists, who tend to live in the bigger cities, become less watchful of those in power.

War, natural disasters and climate change are destroying some of the world's most precious cultural sites. Google is trying to help preserve these archaeological wonders by allowing users access to 3D images of these treasures through its site.
But the project is raising questions about Google's motivations and about who should own the digital copyrights. Some critics call it a form of "digital colonialism." 
When it comes to archaeological treasures, the losses have been mounting. ISIS blew up parts of the ancient city of Palmyra in Syria, and in 2016 an earthquake hit Bagan, an ancient city in Myanmar, damaging dozens of temples. In the past, all archaeologists and historians had for restoration and research were photos, drawings, remnants and intuition.
But that's changing. Before the earthquake at Bagan, many of the temples on the site were scanned. . . . [These] scans . . . are on Google's Arts & Culture site. The digital renditions allow viewers to virtually wander the halls of the temple, look up-close at paintings and turn the building over, to look up at its chambers. . . . [Google Arts & Culture] works with museums and other nonprofits . . . to put high-quality images online. 
The images of the temples in Bagan are part of a collaboration with CyArk, a nonprofit that creates the 3D scanning of historic sites. . . . Google . . . says [it] doesn't make money off this website, but it fits in with Google's mission to make the world's information available and useful. 
Critics say the collaboration could be an attempt by a large corporation to wrap itself in the sheen of culture. Ethan Watrall, an archaeologist, professor at Michigan State University and a member of the Society for American Archaeology, says he's not comfortable with the arrangement between CyArk and Google. . . . Watrall says this project is just a way for Google to promote Google. "They want to make this material accessible so people will browse it and be filled with wonder by it," he says. "But at its core, it's all about advertisements and driving traffic." Watrall says these images belong on the site of a museum or educational institution, where there is serious scholarship and a very different mission. . . .
[There's] another issue for some archaeologists and art historians. CyArk owns the copyrights of the scans — not the countries where these sites are located. That means the countries need CyArk's permission to use these images for commercial purposes. 
Erin Thompson, a professor of art crime at John Jay College of Criminal Justice in New York City, says it's the latest example of a Western nation appropriating a foreign culture, a centuries-long battle. . . . CyArk says it copyrights the scans so no one can use them in an inappropriate way. The company says it works closely with authorities during the process, even training local people to help. But critics like Thompson are not persuaded. . . . She would prefer the scans to be owned by the countries and people where these sites are located.

For two years, I tracked down dozens of . . . Chinese in Upper Egypt [who were] selling lingerie. In a deeply conservative region, where Egyptian families rarely allow women to work or own businesses, the Chinese flourished because of their status as outsiders. They didn’t gossip, and they kept their opinions to themselves. In a New Yorker article entitled “Learning to Speak Lingerie,” I described the Chinese use of Arabic as another non-threatening characteristic. I wrote, “Unlike Mandarin, Arabic is inflected for gender, and Chinese dealers, who learn the language strictly by ear, often pick up speech patterns from female customers. I’ve come to think of it as the lingerie dialect, and there’s something disarming about these Chinese men speaking in the feminine voice.” . . .
When I wrote about the Chinese in the New Yorker, most readers seemed to appreciate the unusual perspective. But as I often find with topics that involve the Middle East, some people had trouble getting past the black-and-white quality of a byline. “This piece is so orientalist I don’t know what to do,” Aisha Gani, a reporter who worked at The Guardian, tweeted. Another colleague at the British paper, Iman Amrani, agreed: “I wouldn’t have minded an article on the subject written by an Egyptian woman—probably would have had better insight.” . . . 
As an MOL (man of language), I also take issue with this kind of essentialism. Empathy and understanding are not inherited traits, and they are not strictly tied to gender and race. An individual who wrestles with a difficult language can learn to be more sympathetic to outsiders and open to different experiences of the world. This learning process—the embarrassments, the frustrations, the gradual sense of understanding and connection—is invariably transformative. In Upper Egypt, the Chinese experience of struggling to learn Arabic and local culture had made them much more thoughtful. In the same way, I was interested in their lives not because of some kind of voyeurism, but because I had also experienced Egypt and Arabic as an outsider. And both the Chinese and the Egyptians welcomed me because I spoke their languages. My identity as a white male was far less important than my ability to communicate.
And that easily lobbed word—“Orientalist”—hardly captures the complexity of our interactions. What exactly is the dynamic when a man from Missouri observes a Zhejiang native selling lingerie to an Upper Egyptian woman? . . . If all of us now stand beside the same river, speaking in ways we all understand, who’s looking east and who’s looking west? Which way is Oriental? 
For all of our current interest in identity politics, there’s no corresponding sense of identity linguistics. You are what you speak—the words that run throughout your mind are at least as fundamental to your selfhood as is your ethnicity or your gender. And sometimes it’s healthy to consider human characteristics that are not inborn, rigid, and outwardly defined. After all, you can always learn another language and change who you are.

British colonial policy . . . went through two policy phases, or at least there were two strategies between which its policies actually oscillated, sometimes to its great advantage. At first, the new colonial apparatus exercised caution, and occupied India by a mix of military power and subtle diplomacy, the high ground in the middle of the circle of circles. This, however, pushed them into contradictions. For, whatever their sense of the strangeness of the country and the thinness of colonial presence, the British colonial state represented the great conquering discourse of Enlightenment rationalism, entering India precisely at the moment of its greatest unchecked arrogance. As inheritors and representatives of this discourse, which carried everything before it, this colonial state could hardly adopt for long such a self-denying attitude. It had restructured everything in Europe—the productive system, the political regimes, the moral and cognitive orders—and would do the same in India, particularly as some empirically inclined theorists of that generation considered the colonies a massive laboratory of utilitarian or other theoretical experiments. Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society. But this modernity did not enter a passive society. Sometimes, its initiatives were resisted by pre-existing structural forms. At times, there was a more direct form of collective resistance. Therefore the map of continuity and discontinuity that this state left behind at the time of independence was rather complex and has to be traced with care.
Most significantly, of course, initiatives for . . . modernity came to assume an external character. The acceptance of modernity came to be connected, ineradicably, with subjection. This again points to two different problems, one theoretical, the other political. Theoretically, because modernity was externally introduced, it is explanatorily unhelpful to apply the logical format of the ‘transition process’ to this pattern of change. Such a logical format would be wrong on two counts. First, however subtly, it would imply that what was proposed to be built was something like European capitalism. (And, in any case, historians have forcefully argued that what it was to replace was not like feudalism, with or without modificatory adjectives.) But, more fundamentally, the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force. This externality is not something that can be casually mentioned and forgotten. It is inscribed on every move, every object, every proposal, every legislative act, each line of causality. It comes to be marked on the epoch itself. This repetitive emphasis on externality should not be seen as a nationalist initiative that is so well rehearsed in Indian social science. . . . 
Quite apart from the externality of the entire historical proposal of modernity, some of its contents were remarkable. . . . Economic reforms, or rather alterations . . . did not foreshadow the construction of a classical capitalist economy, with its necessary emphasis on extractive and transport sectors. What happened was the creation of a degenerate version of capitalism—what early dependency theorists called the ‘development of underdevelopment’.

The magic of squatter cities is that they are improved steadily and gradually by their residents. To a planner’s eye, these cities look chaotic. I trained as a biologist and to my eye, they look organic. Squatter cities are also unexpectedly green. They have maximum density—1 million people per square mile in some areas of Mumbai—and have minimum energy and material use. People get around by foot, bicycle, rickshaw, or the universal shared taxi.
Not everything is efficient in the slums, though. In the Brazilian favelas where electricity is stolen and therefore free, people leave their lights on all day. But in most slums recycling is literally a way of life. The Dharavi slum in Mumbai has 400 recycling units and 30,000 ragpickers. Six thousand tons of rubbish are sorted every day. In 2007, the Economist reported that in Vietnam and Mozambique, “Waves of gleaners sift the sweepings of Hanoi’s streets, just as Mozambican children pick over the rubbish of Maputo’s main tip. Every city in Asia and Latin America has an industry based on gathering up old cardboard boxes.” . . .
In his 1985 article, Calthorpe made a statement that still jars with most people: “The city is the most environmentally benign form of human settlement. Each city dweller consumes less land, less energy, less water, and produces less pollution than his counterpart in settlements of lower densities.” “Green Manhattan” was the inflammatory title of a 2004 New Yorker article by David Owen. “By the most significant measures,” he wrote, “New York is the greenest community in the United States, and one of the greenest cities in the world . . . The key to New York’s relative environmental benignity is its extreme compactness. . . . Placing one and a half million people on a twenty-three-square-mile island sharply reduces their opportunities to be wasteful.” He went on to note that this very compactness forces people to live in the world’s most energy-efficient apartment buildings. . . .
Urban density allows half of humanity to live on 2.8 per cent of the land. . . . Consider just the infrastructure efficiencies. According to a 2004 UN report: “The concentration of population and enterprises in urban areas greatly reduces the unit cost of piped water, sewers, drains, roads, electricity, garbage collection, transport, health care, and schools.” . . . [T]he nationally subsidised city of Manaus in northern Brazil “answers the question” of how to stop deforestation: give people decent jobs. Then they can afford houses, and gain security. One hundred thousand people who would otherwise be deforesting the jungle around Manaus are now prospering in town making such things as mobile phones and televisions. . . .
Of course, fast-growing cities are far from an unmitigated good. They concentrate crime, pollution, disease and injustice as much as business, innovation, education and entertainment. . . . But if they are overall a net good for those who move there, it is because cities offer more than just jobs. They are transformative: in the slums, as well as the office towers and leafy suburbs, the progress is from hick to metropolitan to cosmopolitan . . .

Not everything looks lovelier the longer and closer its inspection. But Saturn does. It is gorgeous through Earthly telescopes. However, the 13 years of close observation provided by Cassini, an American spacecraft, showed the planet, its moons and its remarkable rings off better and better, revealing finer structures, striking novelties and greater drama. . . .
By and large the big things in the solar system — planets and moons — are thought of as having been around since the beginning. The suggestion that rings and moons are new is, though, made even more interesting by the fact that one of those moons, Enceladus, is widely considered the most promising site in the solar system on which to look for alien life. If Enceladus is both young and bears life, that life must have come into being quickly. This is also believed to have been the case on Earth. Were it true on Enceladus, that would encourage the idea that life evolves easily when conditions are right.

One reason for thinking Saturn's rings are young is that they are bright. The solar system is suffused with comet dust, and comet dust is dark. Leaving Saturn's ring system, which Cassini has shown to be more than 90% water ice, out in such a mist is like leaving laundry hanging on a line downwind from a smokestack; it will get dirty. The lighter the rings are, the faster this will happen, for the less mass they contain, the less celestial pollution they can absorb before they start to discolour... Jeff Cuzzi, a scientist at America’s space agency, NASA, who helped run Cassini, told the Lunar and Planetary Science Conference in Houston that combining the mass estimates with Cassini's measurements of the density of comet dust near Saturn suggests the rings are no older than the first dinosaurs, nor younger than the last of them; that is, they are somewhere between 200 million and 70 million years old.
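
The age estimate rests on a simple accumulation argument; the relation below is a schematic reconstruction of that reasoning, not a formula quoted from the article. If rings of total mass $M$ have been soaking up infalling comet dust at a rate $\dot{M}_{\mathrm{dust}}$ and now show a pollution fraction $f$ (well under 10%, since they remain more than 90% water ice), their exposure age is roughly

$$t \approx \frac{f\,M}{\dot{M}_{\mathrm{dust}}},$$

and plugging Cassini's mass estimates and measured dust flux into a relation of this kind is what yields the 70-million-to-200-million-year window quoted above.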

That timing fits well with a theory put forward in 2016 by Matija Cuk of the SETI Institute in California and his colleagues. They suggest that at around the same time as the rings came into being, an old set of moons orbiting Saturn destroyed themselves, and from their remains emerged not only the rings but also the planet’s current suite of inner moons — Rhea, Dione, Tethys, Enceladus and Mimas. . . .

Dr. Cuk and his colleagues used computer simulations of Saturn’s moons’ orbits as a sort of time machine. Looking at the rate at which tidal friction is causing these orbits to lengthen, they extrapolated backwards to find out what those orbits would have looked like in the past. They discovered that about 100m years ago the orbits of two of them, Tethys and Dione, would have interacted in a way that left the planes in which they orbit markedly tilted. But their orbits are untilted. The obvious, if unsettling, conclusion was that this interaction never happened — and thus that at the time when it should have happened, Dione and Tethys were simply not there. They must have come into being later.

“Everybody pretty much agrees that the relationship between elephants and people has dramatically changed,” [says psychologist Gay] Bradshaw. . . . “Where for centuries humans and elephants lived in relatively peaceful coexistence, there is now hostility and violence. Now, I use the term ‘violence’ because of the intentionality associated with it, both in the aggression of humans and, at times, the recently observed behavior of elephants.” . . .
Typically, elephant researchers have cited, as a cause of aggression, the high levels of testosterone in newly matured male elephants or the competition for land and resources between elephants and humans. But. . . Bradshaw and several colleagues argue. . . that today’s elephant populations are suffering from a form of chronic stress, a kind of species-wide trauma. Decades of poaching and culling and habitat loss, they claim, have so disrupted the intricate web of familial and societal relations by which young elephants have traditionally been raised in the wild, and by which established elephant herds are governed, that what we are now witnessing is nothing less than a precipitous collapse of elephant culture. . . .
Elephants, when left to their own devices, are profoundly social creatures. . . . Young elephants are raised within an extended, multitiered network of doting female caregivers that includes the birth mother, grandmothers, aunts and friends. These relations are maintained over a life span as long as 70 years. Studies of established herds have shown that young elephants stay within 15 feet of their mothers for nearly all of their first eight years of life, after which young females are socialized into the matriarchal network, while young males go off for a time into an all-male social group before coming back into the fold as mature adults. . . .
This fabric of elephant society, Bradshaw and her colleagues [demonstrate], ha[s] effectively been frayed by years of habitat loss and poaching, along with systematic culling by government agencies to control elephant numbers and translocations of herds to different habitats. . . . As a result of such social upheaval, calves are now being born to and raised by ever younger and inexperienced mothers. Young orphaned elephants, meanwhile, that have witnessed the death of a parent at the hands of poachers are coming of age in the absence of the support system that defines traditional elephant life. “The loss of elephant elders,” [says] Bradshaw . . . "and the traumatic experience of witnessing the massacres of their family, impairs normal brain and behavior development in young elephants.”
What Bradshaw and her colleagues describe would seem to be an extreme form of anthropocentric conjecture if the evidence that they’ve compiled from various elephant researchers. . . weren’t so compelling. The elephants of decimated herds, especially orphans who’ve watched the death of their parents and elders from poaching and culling, exhibit behavior typically associated with post-traumatic stress disorder and other trauma-related disorders in humans: abnormal startle response, unpredictable asocial behavior, inattentive mothering and hyperaggression. . . .
[According to Bradshaw], “Elephants are suffering and behaving in the same ways that we recognize in ourselves as a result of violence. . . . Except perhaps for a few specific features, brain organization and early development of elephants and humans are extremely similar.”

The only thing worse than being lied to is not knowing you’re being lied to. It’s true that plastic pollution is a huge problem, of planetary proportions. And it’s true we could all do more to reduce our plastic footprint. The lie is that blame for the plastic problem is wasteful consumers and that changing our individual habits will fix it.
Recycling plastic is to saving the Earth what hammering a nail is to halting a falling skyscraper. You struggle to find a place to do it and feel pleased when you succeed. But your effort is wholly inadequate and distracts from the real problem of why the building is collapsing in the first place. The real problem is that single-use plastic—the very idea of producing plastic items like grocery bags, which we use for an average of 12 minutes but can persist in the environment for half a millennium—is an incredibly reckless abuse of technology. Encouraging individuals to recycle more will never solve the problem of a massive production of single-use plastic that should have been avoided in the first place.
As an ecologist and evolutionary biologist, I have had a disturbing window into the accumulating literature on the hazards of plastic pollution. Scientists have long recognized that plastics biodegrade slowly, if at all, and pose multiple threats to wildlife through entanglement and consumption. More recent reports highlight dangers posed by absorption of toxic chemicals in the water and by plastic odors that mimic some species’ natural food. Plastics also accumulate up the food chain, and studies now show that we are likely ingesting it ourselves in seafood. . . .
Beginning in the 1950s, big beverage companies like Coca-Cola and Anheuser-Busch, along with Philip Morris and others, formed a non-profit called Keep America Beautiful. Its mission is to educate and encourage environmental stewardship in the public. . . . At face value, these efforts seem benevolent, but they obscure the real problem, which is the role that corporate polluters play in the plastic problem. This clever misdirection has led journalist and author Heather Rogers to describe Keep America Beautiful as the first corporate greenwashing front, as it has helped shift the public focus to consumer recycling behavior and actively thwarted legislation that would increase extended producer responsibility for waste management. . . . [T]he greatest success of Keep America Beautiful has been to shift the onus of environmental responsibility onto the public while simultaneously becoming a trusted name in the environmental movement. . . .
So what can we do to make responsible use of plastic a reality? First: reject the lie. Litterbugs are not responsible for the global ecological disaster of plastic. Humans can only function to the best of their abilities, given time, mental bandwidth and systemic constraints. Our huge problem with plastic is the result of a permissive legal framework that has allowed the uncontrolled rise of plastic pollution, despite clear evidence of the harm it causes to local communities and the world’s oceans. Recycling is also too hard in most parts of the U.S. and lacks the proper incentives to make it work well.

Economists have spent most of the 20th century ignoring psychology, positive or otherwise. But today there is a great deal of emphasis on how happiness can shape global economies, or — on a smaller scale — successful business practice. This is driven, in part, by a trend in "measuring" positive emotions, mostly so they can be optimized. Neuroscientists, for example, claim to be able to locate specific emotions, such as happiness or disappointment, in particular areas of the brain. Wearable technologies, such as Spire, offer data-driven advice on how to reduce stress. We are no longer just dealing with "happiness" in a philosophical or romantic sense — it has become something that can be monitored and measured, including by our behavior, use of social media and bodily indicators such as pulse rate and facial expressions.
There is nothing automatically sinister about this trend. But it is disquieting that the businesses and experts driving the quantification of happiness claim to have our best interests at heart, often concealing their own agendas in the process. In the workplace, happy workers are viewed as a "win-win." Work becomes more pleasant, and employees, more productive. But this is now being pursued through the use of performance-evaluating wearable technology, such as Humanyze or Virgin Pulse, both of which monitor physical signs of stress and activity toward the goal of increasing productivity.
Cities such as Dubai, which has pledged to become the "happiest city in the world," dream up ever-more elaborate and intrusive ways of collecting data on well-being — to the point where there is now talk of using CCTV cameras to monitor facial expressions in public spaces. New ways of detecting emotions are hitting the market all the time: One company, Beyond Verbal, aims to calculate moods conveyed in a phone conversation, potentially without the knowledge of at least one of the participants. And Facebook [has] demonstrated . . . that it could influence our emotions through tweaking our news feeds — opening the door to ever-more targeted manipulation in advertising and influence.
As the science grows more sophisticated and technologies become more intimate with our thoughts and bodies, a clear trend is emerging. Where happiness indicators were once used as a basis to reform society, challenging the obsession with money that G.D.P. measurement entrenches, they are increasingly used as a basis to transform or discipline individuals.
Happiness becomes a personal project that each of us must now work on, like going to the gym. Since the 1970s, depression has come to be viewed as a cognitive or neurological defect in the individual, and never a consequence of circumstances. All of this simply escalates the sense of responsibility each of us feels for our own feelings, and with it, the sense of failure when things go badly. A society that deliberately removed certain sources of misery, such as precarious and exploitative employment, may well be a happier one. But we won't get there by making this single, often fleeting emotion the over-arching goal.

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice. . . .
Modern evolutionary biology dates back to a synthesis that emerged around the 1940s-60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet [new evidence] from genomics, epigenetics and developmental biology [indicates] that evolution is more complex than we once assumed. . . .
In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor [needs revision]. . . . Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath. . . .
The received wisdom is that parental experiences can’t affect the characters of their offspring. Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens. Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear. . . .
Epigenetics is only part of the story. Through culture and society, [humans and other animals] inherit knowledge and skills acquired by [their] parents. . . . All this complexity . . . points to an evolutionary process in which genomes (over hundreds to thousands of generations), epigenetic modifications and inherited cultural factors (over several, perhaps tens or hundreds of generations), and parental effects (over single-generation timespans) collectively inform how organisms adapt. These extra-genetic kinds of inheritance give organisms the flexibility to make rapid adjustments to environmental challenges, dragging genetic change in their wake – much like a rowdy pack of dogs.

The Indian government [has] announced an international competition to design a National War Memorial in New Delhi, to honour all of the Indian soldiers who served in the various wars and counter-insurgency campaigns from 1947 onwards. The terms of the competition also specified that the new structure would be built adjacent to the India Gate – a memorial to the Indian soldiers who died in the First World War. Between the old imperialist memorial and the proposed nationalist one, India’s contribution to the Second World War is airbrushed out of existence.
The Indian government’s conception of the war memorial was not merely absent-minded. Rather, it accurately reflected the fact that both academic history and popular memory have yet to come to terms with India’s Second World War, which continues to be seen as little more than mood music in the drama of India’s advance towards independence and partition in 1947. Further, the political trajectory of the postwar subcontinent has militated against popular remembrance of the war. With partition and the onset of the India-Pakistan rivalry, both of the new nations needed fresh stories for self-legitimisation rather than focusing on shared wartime experiences.
However, the Second World War played a crucial role in both the independence and partition of India. . . . The Indian army recruited, trained and deployed some 2.5 million men, almost 90,000 of whom were killed and many more injured. Even at the time, it was recognised as the largest volunteer force in the war. . . .
India’s material and financial contribution to the war was equally significant. India emerged as a major military-industrial and logistical base for Allied operations in south-east Asia and the Middle East. This led the United States to take considerable interest in the country’s future, and ensured that this was no longer the preserve of the British government.
Other wartime developments pointed in the direction of India’s independence. In a stunning reversal of its long-standing financial relationship with Britain, India finished the war as one of the largest creditors to the imperial power. 
Such extraordinary mobilization for war was achieved at great human cost, with the Bengal famine the most extreme manifestation of widespread wartime deprivation. The costs on India’s home front must be counted in millions of lives.
Indians signed up to serve on the war and home fronts for a variety of reasons. . . . [M]any were convinced that their contribution would open the doors to India’s freedom. . . . The political and social churn triggered by the war was evident in the massive waves of popular protest and unrest that washed over rural and urban India in the aftermath of the conflict. This turmoil was crucial in persuading the Attlee government to rid itself of the incubus of ruling India. . . .
Seventy years on, it is time that India engaged with the complex legacies of the Second World War. Bringing the war into the ambit of the new national memorial would be a fitting – if not overdue – recognition that this was India’s War.

The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. . . . The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills. . . .
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw. Even within a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm. Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and more accurate forests. Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. . . . That’s not likely to lead to breakthroughs.
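
To make the bagging idea concrete, here is a minimal sketch in Python using scikit-learn; the passage names no library or dataset, so everything below is an illustrative assumption. It trains an ensemble of bootstrap-sampled trees and compares it with a single tree on the same synthetic data. (Training on the hardest cases, which the passage also mentions, is boosting, a separate technique not shown here.)

# A minimal sketch of bagging with a random forest (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the passage's fox-vs-dog pictures would be a
# classification problem of the same shape.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# bootstrap=True trains each tree on a different resample of the data
# ("bagging"); max_features="sqrt" limits the features each split may use,
# adding further diversity among trees. Predictions are a majority vote.
forest = RandomForestClassifier(n_estimators=100, bootstrap=True,
                                max_features="sqrt", random_state=0)
forest.fit(X_train, y_train)
print("diverse ensemble accuracy:", forest.score(X_test, y_test))

# A single bootstrap-trained tree is usually worse than the diverse ensemble.
single = RandomForestClassifier(n_estimators=1, bootstrap=True, random_state=0)
single.fit(X_train, y_train)
print("single tree accuracy:", single.score(X_test, y_test))

The comparison mirrors the passage's hiring argument: the ensemble wins not because any one tree is 'best' but because the trees disagree in useful ways.
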
Grove snails as a whole are distributed all over Europe, but a specific variety of the snail, with a distinctive white-lipped shell, is found exclusively in Ireland and in the Pyrenees mountains that lie on the border between France and Spain. The researchers sampled a total of 423 snail specimens from 36 sites distributed across Europe, with an emphasis on gathering large numbers of the white-lipped variety. When they sequenced genes from the mitochondrial DNA of each of these snails and used algorithms to analyze the genetic diversity between them, they found that. . . a distinct lineage (the snails with the white-lipped shells) was indeed endemic to the two very specific and distant places in question.
Explaining this is tricky. Previously, some had speculated that the strange distributions of creatures such as the white-lipped grove snails could be explained by convergent evolution—in which two populations evolve the same trait by coincidence—but the underlying genetic similarities between the two groups rules that out. Alternately, some scientists had suggested that the white-lipped variety had simply spread over the whole continent, then been wiped out everywhere besides Ireland and the Pyrenees, but the researchers say their sampling and subsequent DNA analysis eliminate that possibility too. “If the snails naturally colonized Ireland, you would expect to find some of the same genetic type in other areas of Europe, especially Britain. We just don’t find them,” Davidson, the lead author, said in a press statement.
Moreover, if they’d gradually spread across the continent, there would be some genetic variation within the white-lipped type, because evolution would introduce variety over the thousands of years it would have taken them to spread from the Pyrenees to Ireland. That variation doesn’t exist, at least in the genes sampled. This means that rather than the organism gradually expanding its range, large populations instead were somehow moved en masse to the other location within the space of a few dozen generations, ensuring a lack of genetic variety.
“There is a very clear pattern, which is difficult to explain except by involving humans,” Davidson said. Humans, after all, colonized Ireland roughly 9,000 years ago, and the oldest fossil evidence of grove snails in Ireland dates to roughly the same era. Additionally, there is archaeological evidence of early sea trade between the ancient peoples of Spain and Ireland via the Atlantic and even evidence that humans routinely ate these types of snails before the advent of agriculture, as their burnt shells have been found in Stone Age trash heaps.
The simplest explanation, then? Boats. These snails may have inadvertently traveled on the floor of the small, coast-hugging skiffs these early humans used for travel, or they may have been intentionally carried to Ireland by the seafarers as a food source. “The highways of the past were rivers and the ocean–as the river that flanks the Pyrenees was an ancient trade route to the Atlantic, what we’re actually seeing might be the long lasting legacy of snails that hitched a ride…as humans travelled from the South of France to Ireland 8,000 years ago,” Davidson said.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.
The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.
When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.
Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests … where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.
To the debit side of the ledger must also be added the transactional costs of metrics: the expenditure of employee time by those tasked with compiling and processing the metrics in the first place – not to mention the time required to actually read them. . . .

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? . . . [N]o country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that just adding computers doesn’t make education any better. . .
The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs and those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse . . . e-governance can be just as bad as any other governance when the real issue is people and their motivation.
For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private sector providers, their provision of care in actual practice was much worse.
In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s . . . report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. . . . As long as the system empowers providers over citizens, technology is irrelevant.
The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more . . . The key . . . is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.
A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment— led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government. . . .