List of top Verbal Ability & Reading Comprehension (VARC) Questions

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice. . . .
Modern evolutionary biology dates back to a synthesis that emerged around the 1940s–60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet [new evidence] from genomics, epigenetics and developmental biology [indicates] that evolution is more complex than we once assumed. . . .
In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor [needs revision]. . . . Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath. . . .
The received wisdom is that parental experiences can’t affect the characters of their offspring. Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens. Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear. . . .
Epigenetics is only part of the story. Through culture and society, [humans and other animals] inherit knowledge and skills acquired by [their] parents. . . . All this complexity . . . points to an evolutionary process in which genomes (over hundreds to thousands of generations), epigenetic modifications and inherited cultural factors (over several, perhaps tens or hundreds of generations), and parental effects (over single-generation timespans) collectively inform how organisms adapt. These extra-genetic kinds of inheritance give organisms the flexibility to make rapid adjustments to environmental challenges, dragging genetic change in their wake – much like a rowdy pack of dogs.

The Indian government [has] announced an international competition to design a National War Memorial in New Delhi, to honour all of the Indian soldiers who served in the various wars and counter-insurgency campaigns from 1947 onwards. The terms of the competition also specified that the new structure would be built adjacent to the India Gate – a memorial to the Indian soldiers who died in the First World War. Between the old imperialist memorial and the proposed nationalist one, India’s contribution to the Second World War is airbrushed out of existence.
The Indian government’s conception of the war memorial was not merely absent-minded. Rather, it accurately reflected the fact that both academic history and popular memory have yet to come to terms with India’s Second World War, which continues to be seen as little more than mood music in the drama of India’s advance towards independence and partition in 1947. Further, the political trajectory of the postwar subcontinent has militated against popular remembrance of the war. With partition and the onset of the India-Pakistan rivalry, both of the new nations needed fresh stories for self-legitimisation rather than focusing on shared wartime experiences.
However, the Second World War played a crucial role in both the independence and partition of India. . . . The Indian army recruited, trained and deployed some 2.5 million men, almost 90,000 of whom were killed and many more injured. Even at the time, it was recognised as the largest volunteer force in the war. . . .
India’s material and financial contribution to the war was equally significant. India emerged as a major military-industrial and logistical base for Allied operations in south-east Asia and the Middle East. This led the United States to take considerable interest in the country’s future, and ensured that this was no longer the preserve of the British government.
Other wartime developments pointed in the direction of India’s independence. In a stunning reversal of its long-standing financial relationship with Britain, India finished the war as one of the largest creditors to the imperial power. 
Such extraordinary mobilisation for war was achieved at great human cost, with the Bengal famine the most extreme manifestation of widespread wartime deprivation. The costs on India’s home front must be counted in millions of lives.
Indians signed up to serve on the war and home fronts for a variety of reasons. . . . [M]any were convinced that their contribution would open the doors to India’s freedom. . . . The political and social churn triggered by the war was evident in the massive waves of popular protest and unrest that washed over rural and urban India in the aftermath of the conflict. This turmoil was crucial in persuading the Attlee government to rid itself of the incubus of ruling India. . . .
Seventy years on, it is time that India engaged with the complex legacies of the Second World War. Bringing the war into the ambit of the new national memorial would be a fitting – if not overdue – recognition that this was India’s War.

The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. . . . The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills. . . .
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw. Even within a knowledge domain, no test or criterion applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm. Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and more accurate forests. Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. . . . That’s not likely to lead to breakthroughs.
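The bagging-and-voting mechanism the passage describes is easy to see in code. Below is a minimal sketch, assuming Python with NumPy and scikit-learn are available; the synthetic dataset and all names (n_trees, votes, and so on) are illustrative, not drawn from the passage:

```python
# A minimal sketch of a random forest built "by hand": many decision trees,
# each trained on a different bootstrap sample (bagging), with a majority
# vote deciding the final classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

n_trees = 25
trees = []
for _ in range(n_trees):
    # Bagging: each tree sees a different bootstrap sample of the data,
    # which is what gives the forest its diversity.
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeClassifier(max_features="sqrt", random_state=0)
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# "A weighted majority rules": here every tree votes with equal weight.
votes = np.stack([t.predict(X) for t in trees])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy on training data:", (ensemble_pred == y).mean())
```

In practice one would reach for sklearn.ensemble.RandomForestClassifier, which packages the same bagging and per-split feature sampling; the passage’s idea of training trees on the hardest cases, those the current ensemble gets wrong, is the separate technique of boosting, used by methods such as AdaBoost.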

Grove snails as a whole are distributed all over Europe, but a specific variety of the snail, with a distinctive white-lipped shell, is found exclusively in Ireland and in the Pyrenees mountains that lie on the border between France and Spain. The researchers sampled a total of 423 snail specimens from 36 sites distributed across Europe, with an emphasis on gathering large numbers of the white-lipped variety. When they sequenced genes from the mitochondrial DNA of each of these snails and used algorithms to analyze the genetic diversity between them, they found that. . . a distinct lineage (the snails with the white-lipped shells) was indeed endemic to the two very specific and distant places in question.
Explaining this is tricky. Previously, some had speculated that the strange distributions of creatures such as the white-lipped grove snails could be explained by convergent evolution—in which two populations evolve the same trait by coincidence—but the underlying genetic similarities between the two groups rule that out. Alternatively, some scientists had suggested that the white-lipped variety had simply spread over the whole continent, then been wiped out everywhere besides Ireland and the Pyrenees, but the researchers say their sampling and subsequent DNA analysis eliminate that possibility too. “If the snails naturally colonized Ireland, you would expect to find some of the same genetic type in other areas of Europe, especially Britain. We just don’t find them,” Davidson, the lead author, said in a press statement.
Moreover, if they’d gradually spread across the continent, there would be some genetic variation within the white-lipped type, because evolution would introduce variety over the thousands of years it would have taken them to spread from the Pyrenees to Ireland. That variation doesn’t exist, at least in the genes sampled. This means that rather than the organism gradually expanding its range, large populations instead were somehow moved en masse to the other location within the space of a few dozen generations, ensuring a lack of genetic variety.
“There is a very clear pattern, which is difficult to explain except by involving humans,” Davidson said. Humans, after all, colonized Ireland roughly 9,000 years ago, and the oldest fossil evidence of grove snails in Ireland dates to roughly the same era. Additionally, there is archaeological evidence of early sea trade between the ancient peoples of Spain and Ireland via the Atlantic and even evidence that humans routinely ate these types of snails before the advent of agriculture, as their burnt shells have been found in Stone Age trash heaps.
The simplest explanation, then? Boats. These snails may have inadvertently traveled on the floor of the small, coast-hugging skiffs these early humans used for travel, or they may have been intentionally carried to Ireland by the seafarers as a food source. “The highways of the past were rivers and the ocean – as the river that flanks the Pyrenees was an ancient trade route to the Atlantic, what we’re actually seeing might be the long-lasting legacy of snails that hitched a ride…as humans travelled from the South of France to Ireland 8,000 years ago,” Davidson said.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.
The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.
When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.
Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests … where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.
To the debit side of the ledger must also be added the transactional costs of metrics: the expenditure of employee time by those tasked with compiling and processing the metrics in the first place – not to mention the time required to actually read them. . . .

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? . . . [N]o country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that adding computers alone doesn’t make education any better. . .
The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs, and that those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse . . . e-governance can be just as bad as any other governance when the real issue is people and their motivation.
For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private-sector providers, their provision of care in actual practice was much worse.
In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s . . . report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. . . . As long as the system empowers providers over citizens, technology is irrelevant.
The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more . . . The key . . . is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.
A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment—led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government. . . .

The World Trade Organisation (WTO) was created in the early 1990s as a component of the Uruguay Round negotiation. However, it could have been negotiated as part of the Tokyo Round of the 1970s, since that negotiation was an attempt at a ‘constitutional reform’ of the General Agreement on Tariffs and Trade (GATT). Or it could have been put off to the future, as the US government wanted. What factors led to the creation of the WTO in the early 1990s?
One factor was the pattern of multilateral bargaining that developed late in the Uruguay Round. Like all complex international agreements, the WTO was a product of a series of trade-offs between principal actors and groups. For the United States, which did not want a new organisation, the dispute settlement part of the WTO package achieved its longstanding goal of a more effective and more legal dispute settlement system. For the Europeans, who by the 1990s had come to view GATT dispute settlement less in political terms and more as a regime of legal obligations, the WTO package was acceptable as a means to discipline the resort to unilateral measures by the United States. Countries like Canada and other middle and smaller trading partners were attracted by the expansion of a rule-based system and by the symbolic value of a trade organisation, both of which inherently support the weak against the strong. The developing countries were attracted due to the provisions banning unilateral measures. Finally, and perhaps most importantly, many countries at the Uruguay Round came to put a higher priority on the export gains than on the import losses that the negotiation would produce, and they came to associate the WTO and a rule-based system with those gains. This reasoning – replicated in many countries – was contained in U.S. Ambassador Kantor’s defence of the WTO, and it amounted to a recognition that international trade and its benefits cannot be enjoyed unless trading nations accept the discipline of a negotiated rule-based environment.
A second factor in the creation of the WTO was pressure from lawyers and the legal process. The dispute settlement system of the WTO was seen as a victory for the legalists, but the matter went deeper than that. The GATT and the WTO are contract organisations based on rules, and it is inevitable that an organisation creating a further rule will in turn be influenced by legal process. Robert Hudec has written of the “momentum of legal development”, but what is this precisely? Legal development can be defined as promotion of the technical legal values of consistency, clarity (or certainty) and effectiveness; these are values that those responsible for administering any legal system will seek to maximize. As it played out in the WTO, consistency meant integrating under one roof the whole lot of separate agreements signed under GATT auspices; clarity meant removing ambiguities about the powers of contracting parties to make certain decisions or to undertake waivers; and effectiveness meant eliminating exceptions arising out of grandfather rights and resolving defects in dispute settlement procedures and institutional provisions. Concern for these values is inherent in any rule-based system of co-operation, since without these values rules would be meaningless in the first place. Rules, therefore, create their own incentive for fulfilment.
The momentum of legal development has occurred in other institutions besides the GATT, most notably in the European Union (EU). Over the past two decades the European Court of Justice (ECJ) has consistently rendered decisions that have incrementally expanded the EU’s internal market, in which the doctrine of ‘mutual recognition’ handed down in the Cassis de Dijon case in 1979 was a key turning point. The court is now widely recognized as a major player in European integration, even though arguably such a strong role was not originally envisaged in the Treaty of Rome, which initiated the current European Union. One means the Court used to expand integration was the ‘teleological method of interpretation’, whereby the actions of member states were evaluated against the accomplishment of the most elementary goals set forth in the Preamble to the (Rome) treaty. The teleological method represents an effort to keep current policies consistent with stated goals, and it is analogous to the effort in GATT to keep contracting party trade practices consistent with stated rules. In both cases legal concerns and procedures are an independent force for further co-operation.
In large part, the WTO was an exercise in consolidation. In the context of a trade negotiation that created a near-revolutionary expansion of international trade rules, the formation of the WTO was a deeply conservative act needed to ensure that the benefits of the new rules would not be lost. The WTO was all about institutional structure and dispute settlement: these are the concerns of conservatives and not revolutionaries, which is why lawyers and legalists took the lead on these issues. The WTO codified the GATT institutional practice that had developed by custom over three decades, and it incorporated a new dispute settlement system that was necessary to keep both old and new rules from becoming a sham. Both the institutional structure and the dispute settlement system were necessary to preserve and enhance the integrity of the multilateral trade regime that had been built incrementally from the 1940s to the 1990s.