List of top Verbal Ability & Reading Comprehension (VARC) Questions asked in CAT

Direction for Reading Comprehension: Each passage given here is followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Starting in 1957, [Noam Chomsky] proclaimed a new doctrine: Language, that most human of all attributes, was innate. The grammatical faculty was built into the infant brain, and your average 3-year-old was not a mere apprentice in the great enterprise of absorbing English from his or her parents, but a “linguistic genius.” Since this message was couched in terms of Chomskyan theoretical linguistics, in discourse so opaque that it was nearly incomprehensible even to some scholars, many people did not hear it. Now, in a brilliant, witty and altogether satisfying book, Mr. Chomsky's colleague Steven Pinker . . . has brought Mr. Chomsky's findings to everyman. In “The Language Instinct” he has gathered persuasive data from such diverse fields as cognitive neuroscience, developmental psychology and speech therapy to make his points, and when he disagrees with Mr. Chomsky he tells you so. . . .
For Mr. Chomsky and Mr. Pinker, somewhere in the human brain there is a complex set of neural circuits programmed with “super-rules” (making up what Mr. Chomsky calls “universal grammar”), rules that are unconscious and instinctive. A half-century ago, this would have been pooh-poohed as a “black box” theory, since one could not actually pinpoint this grammatical faculty in a specific part of the brain, or describe its functioning. But now things are different. Neurosurgeons [have now found that this] “black box” is situated in and around Broca’s area, on the left side of the forebrain. . . .
Unlike Mr. Chomsky, Mr. Pinker firmly places the wiring of the brain for language within the framework of Darwinian natural selection and evolution. He effectively disposes of all claims that intelligent nonhuman primates like chimps have any abilities to learn and use language. It is not that chimps lack the vocal apparatus to speak; it is just that their brains are unable to produce or use grammar. On the other hand, the “language instinct,” when it first appeared among our most distant hominid ancestors, must have given them a selective reproductive advantage over their competitors (including the ancestral chimps). . . .
So according to Mr. Pinker, the roots of language must be in the genes, but there cannot be a “grammar gene” any more than there can be a gene for the heart or any other complex body structure. This proposition will undoubtedly raise the hackles of some behavioural psychologists and anthropologists, for it apparently contradicts the liberal idea that human behavior may be changed for the better by improvements in culture and environment, and it might seem to invite the twin bugaboos of biological determinism and racism. Yet Mr. Pinker stresses one point that should allay such fears. Even though there are 4,000 to 6,000 languages today, they are all sufficiently alike to be considered one language by an extraterrestrial observer. In other words, most of the diversity of the world’s cultures, so beloved to anthropologists, is superficial and minor compared to the similarities. Racial differences are literally only “skin deep.” The fundamental unity of humanity is the theme of Mr. Chomsky's universal grammar, and of this exciting book.
Keeping time accurately comes with a price. The maximum accuracy of a clock is directly related to how much disorder, or entropy, it creates every time it ticks. Natalia Ares at the University of Oxford and her colleagues made this discovery using a tiny clock with an accuracy that can be controlled. The clock consists of a 50-nanometre-thick membrane of silicon nitride, vibrated by an electric current. Each time the membrane moved up and down once and then returned to its original position, the researchers counted a tick, and the regularity of the spacing between the ticks represented the accuracy of the clock. The researchers found that as they increased the clock’s accuracy, the heat produced in the system grew, increasing the entropy of its surroundings by jostling nearby particles . . . “If a clock is more accurate, you are paying for it somehow,” says Ares. In this case, you pay for it by pouring more ordered energy into the clock, which is then converted into entropy. “By measuring time, we are increasing the entropy of the universe,” says Ares. The more entropy there is in the universe, the closer it may be to its eventual demise. “Maybe we should stop measuring time,” says Ares. The scale of the additional entropy is so small, though, that there is no need to worry about its effects, she says.
The increase in entropy in timekeeping may be related to the “arrow of time”, says Marcus Huber at the Austrian Academy of Sciences in Vienna, who was part of the research team. It has been suggested that the reason that time only flows forward, not in reverse, is that the total amount of entropy in the universe is constantly increasing, creating disorder that cannot be put in order again.
The relationship that the researchers found is a limit on the accuracy of a clock, so it doesn’t mean that a clock that creates the most possible entropy would be maximally accurate – hence a large, inefficient grandfather clock isn’t more precise than an atomic clock. “It’s a bit like fuel use in a car. Just because I’m using more fuel doesn’t mean that I’m going faster or further,” says Huber.
When the researchers compared their results with theoretical models developed for clocks that rely on quantum effects, they were surprised to find that the relationship between accuracy and entropy seemed to be the same for both. . . . We can’t be sure yet that these results are actually universal, though, because there are many types of clocks for which the relationship between accuracy and entropy hasn’t been tested. “It’s still unclear how this principle plays out in real devices such as atomic clocks, which push the ultimate quantum limits of accuracy,” says Mark Mitchison at Trinity College Dublin in Ireland. Understanding this relationship could be helpful for designing clocks in the future, particularly those used in quantum computers and other devices where both accuracy and temperature are crucial, says Ares. This finding could also help us understand more generally how the quantum world and the classical world are similar and different in terms of thermodynamics and the passage of time.
Back in the early 2000s, an awesome thing happened in the New X-Men comics. Our mutant heroes had been battling giant robots called Sentinels for years, but suddenly these mechanical overlords spawned a new threat: Nano-Sentinels! Not content to rule Earth with their metal fists, these tiny robots invaded our bodies at the microscopic level. Infected humans were slowly converted into machines, cell by cell.
Now, a new wave of extremely odd robots is making at least part of the Nano-Sentinels story come true. Using exotic fabrication materials like squishy hydrogels and elastic polymers, researchers are making autonomous devices that are often tiny and that could turn out to be more powerful than an army of Terminators. Some are 1-centimetre blobs that can skate over water. Others are flat sheets that can roll themselves into tubes, or matchstick-sized plastic coils that act as powerful muscles. No, they won’t be invading our bodies and turning us into Sentinels – which I personally find a little disappointing – but some of them could one day swim through our bloodstream to heal us. They could also clean up pollutants in water or fold themselves into different kinds of vehicles for us to drive. . . .
Unlike a traditional robot, which is made of mechanical parts, these new kinds of robots are made from molecular parts. The principle is the same: both are devices that can move around and do things independently. But a robot made from smart materials might be nothing more than a pink drop of hydrogel. Instead of gears and wires, it’s assembled from two kinds of molecules – some that love water and some that avoid it – which interact to allow the bot to skate on top of a pond.
Sometimes these materials are used to enhance more conventional robots. One team of researchers, for example, has developed a different kind of hydrogel that becomes sticky when exposed to a low-voltage zap of electricity and then stops being sticky when the electricity is switched off. This putty-like gel can be pasted right onto the feet or wheels of a robot. When the robot wants to climb a sheer wall or scoot across the ceiling, it can activate its sticky feet with a few volts. Once it is back on a flat surface again, the robot turns off the adhesive like a light switch.
Robots that are wholly or partly made of gloop aren’t the future that I was promised in science fiction. But it’s definitely the future I want. I’m especially keen on the nanometre-scale “soft robots” that could one day swim through our bodies. Metin Sitti, a director at the Max Planck Institute for Intelligent Systems in Germany, worked with colleagues to prototype these tiny, synthetic beasts using various stretchy materials, such as simple rubber, and seeding them with magnetic microparticles. They are assembled into a finished shape by applying magnetic fields. The results look like flowers or geometric shapes made from Tinkertoy ball and stick modelling kits. They’re guided through tubes of fluid using magnets, and can even stop and cling to the sides of a tube.
Today we can hardly conceive of ourselves without an unconscious. Yet between 1700 and 1900, this notion developed as a genuinely original thought. The “unconscious” burst the shell of conventional language, coined as it had been to embody the fleeting ideas and the shifting conceptions of several generations until, finally, it became fixed and defined in specialized terms within the realm of medical psychology and Freudian psychoanalysis.
The vocabulary concerning the soul and the mind increased enormously in the course of the nineteenth century. The enrichments of literary and intellectual language led to an altered understanding of the meanings that underlie time-honored expressions and traditional catchwords. At the same time, once coined, powerful new ideas attracted to themselves a whole host of seemingly unrelated issues, practices, and experiences, creating a peculiar network of preoccupations that as a group had not existed before. The drawn-out attempt to approach and define the unconscious brought together the spiritualist and the psychical researcher of borderline phenomena (such as apparitions, spectral illusions, haunted houses, mediums, trance, automatic writing); the psychiatrist or alienist probing the nature of mental disease, of abnormal ideation, hallucination, delirium, melancholia, mania; the surgeon performing operations with the aid of hypnotism; the magnetizer claiming to correct the disequilibrium in the universal flow of magnetic fluids but who soon came to be regarded as a clever manipulator of the imagination; the physiologist and the physician who puzzled over sleep, dreams, sleepwalking, anesthesia, the influence of the mind on the body in health and disease; the neurologist concerned with the functions of the brain and the physiological basis of mental life; the philosopher interested in the will, the emotions, consciousness, knowledge, imagination and the creative genius; and, last but not least, the psychologist.
Significantly, most if not all of these practices (for example, hypnotism in surgery or psychological magnetism) originated in the waning years of the eighteenth century and during the early decades of the nineteenth century, as did some of the disciplines (such as psychology and psychical research). The majority of topics too were either new or assumed hitherto unknown colors. Thus, before 1790, few if any spoke, in medical terms, of the affinity between creative genius and the hallucinations of the insane . . .
Striving vaguely and independently to give expression to a latent conception, various lines of thought can be brought together by some novel term. The new concept then serves as a kind of resting place or stocktaking in the development of ideas, giving satisfaction and a stimulus for further discussion or speculation. Thus, the massive introduction of the term unconscious by Hartmann in 1869 appeared to focalize many stray thoughts, affording a temporary feeling that a crucial step had been taken forward, a comprehensive knowledge gained, a knowledge that required only further elaboration, explication, and unfolding in order to bring in a bounty of higher understanding. Ultimately, Hartmann’s attempt at defining the unconscious proved fruitless because he extended its reach into every realm of organic and inorganic, spiritual, intellectual, and instinctive existence, severely diluting the precision and compromising the impact of the concept.
It has been said that knowledge, or the problem of knowledge, is the scandal of philosophy. The scandal is philosophy’s apparent inability to show how, when and why we can be sure that we know something or, indeed, that we know anything. Philosopher Michael Williams writes: ‘Is it possible to obtain knowledge at all? This problem is pressing because there are powerful arguments, some very ancient, for the conclusion that it is not . . . Scepticism is the skeleton in Western rationalism’s closet’. While it is not clear that the scandal matters to anyone but philosophers, philosophers point out that it should matter to everyone, at least given a certain conception of knowledge. For, they explain, unless we can ground our claims to knowledge as such, which is to say, distinguish it from mere opinion, superstition, fantasy, wishful thinking, ideology, illusion or delusion, then the actions we take on the basis of presumed knowledge – boarding an airplane, swallowing a pill, finding someone guilty of a crime – will be irrational and unjustifiable.
That is all quite serious-sounding but so also are the rattlings of the skeleton: that is, the sceptic’s contention that we cannot be sure that we know anything – at least not if we think of knowledge as something like having a correct mental representation of reality, and not if we think of reality as something like things-as-they-are-in-themselves, independent of our perceptions, ideas or descriptions. For, the sceptic will note, since reality, under that conception of it, is outside our ken (we cannot catch a glimpse of things-in-themselves around the corner of our own eyes; we cannot form an idea of reality that floats above the processes of our conceiving it), we have no way to compare our mental representations with things-as-they-are-in-themselves and therefore no way to determine whether they are correct or incorrect. Thus the sceptic may repeat (rattling loudly), you cannot be sure you ‘know’ something or anything at all – at least not, he may add (rattling softly before disappearing), if that is the way you conceive ‘knowledge’.
There are a number of ways to handle this situation. The most common is to ignore it. Most people outside the academy – and, indeed, most of us inside it – are unaware of or unperturbed by the philosophical scandal of knowledge and go about our lives without too many epistemic anxieties. We hold our beliefs and presumptive knowledges more or less confidently, usually depending on how we acquired them (I saw it with my own eyes; I heard it on Fox News; a guy at the office told me) and how broadly and strenuously they seem to be shared or endorsed by various relevant people: experts and authorities, friends and family members, colleagues and associates. And we examine our convictions more or less closely, explain them more or less extensively, and defend them more or less vigorously, usually depending on what seems to be at stake for ourselves and/or other people and what resources are available for reassuring ourselves or making our beliefs credible to others (look, it’s right here on the page; add up the figures yourself; I happen to be a heart specialist).
I have elaborated . . . a framework for analyzing the contradictory pulls on [Indian] nationalist ideology in its struggle against the dominance of colonialism and the resolution it offered to those contradictions. Briefly, this resolution was built around a separation of the domain of culture into two spheres—the material and the spiritual. It was in the material sphere that the claims of Western civilization were the most powerful. Science, technology, rational forms of economic organization, modern methods of statecraft—these had given the European countries the strength to subjugate the non-European people . . . To overcome this domination, the colonized people had to learn those superior techniques of organizing material life and incorporate them within their own cultures. . . . But this could not mean the imitation of the West in every aspect of life, for then the very distinction between the West and the East would vanish—the self-identity of national culture would itself be threatened. . . . The discourse of nationalism shows that the material/spiritual distinction was condensed into an analogous, but ideologically far more powerful, dichotomy: that between the outer and the inner. . . . Applying the inner/outer distinction to the matter of concrete day-to-day living separates the social space into ghar and bāhir, the home and the world. The world is the external, the domain of the material; the home represents one’s inner spiritual self, one’s true identity. The world is a treacherous terrain of the pursuit of material interests, where practical considerations reign supreme. It is also typically the domain of the male. The home in its essence must remain unaffected by the profane activities of the material world—and woman is its representation. And so one gets an identification of social roles by gender to correspond with the separation of the social space into ghar and bāhir. . . .
The colonial situation, and the ideological response of nationalism to the critique of Indian tradition, introduced an entirely new substance to [these dichotomies] and effected their transformation. The material/spiritual dichotomy, to which the terms world and home corresponded, had acquired . . . a very special significance in the nationalist mind. The world was where the European power had challenged the non-European peoples and, by virtue of its superior material culture, had subjugated them. But, the nationalists asserted, it had failed to colonize the inner, essential, identity of the East which lay in its distinctive, and superior, spiritual culture. . . . [I]n the entire phase of the national struggle, the crucial need was to protect, preserve and strengthen the inner core of the national culture, its spiritual essence. . .
Once we match this new meaning of the home/world dichotomy with the identification of social roles by gender, we get the ideological framework within which nationalism answered the women’s question. It would be a grave error to see in this, as liberals are apt to in their despair at the many marks of social conservatism in nationalist practice, a total rejection of the West. Quite the contrary: the nationalist paradigm in fact supplied an ideological principle of selection.
It’s easy to forget that most of the world’s languages are still transmitted orally with no widely established written form. While speech communities are increasingly involved in projects to protect their languages – in print, on air and online – orality is fragile and contributes to linguistic vulnerability. But indigenous languages are about much more than unusual words and intriguing grammar: They function as vehicles for the transmission of cultural traditions, environmental understandings and knowledge about medicinal plants, all at risk when elders die and livelihoods are disrupted.
Both push and pull factors lead to the decline of languages. Through war, famine and natural disasters, whole communities can be destroyed, taking their language with them to the grave, such as the indigenous populations of Tasmania who were wiped out by colonists. More commonly, speakers live on but abandon their language in favor of another vernacular, a widespread process that linguists refer to as “language shift” from which few languages are immune. Such trading up and out of a speech form occurs for complex political, cultural and economic reasons – sometimes voluntary for economic and educational reasons, although often amplified by state coercion or neglect. Welsh, long stigmatized and disparaged by the British state, has rebounded with vigor.
Many speakers of endangered, poorly documented languages have embraced new digital media with excitement. Speakers of previously exclusively oral tongues are turning to the web as a virtual space for languages to live on. Internet technology offers powerful ways for oral traditions and cultural practices to survive, even thrive, among increasingly mobile communities. I have watched as videos of traditional wedding ceremonies and songs are recorded on smartphones in London by Nepali migrants, then uploaded to YouTube and watched an hour later by relatives in remote Himalayan villages . . .
Globalization is regularly, and often uncritically, pilloried as a major threat to linguistic diversity. But in fact, globalization is as much process as it is ideology, certainly when it comes to language. The real forces behind cultural homogenization are unbending beliefs, exchanged through a globalized delivery system, reinforced by the historical monolingualism prevalent in much of the West.
Monolingualism – the condition of being able to speak only one language – is regularly accompanied by a deep-seated conviction in the value of that language over all others. Across the largest economies that make up the G8, being monolingual is still often the norm, with multilingualism appearing unusual and even somewhat exotic. The monolingual mindset stands in sharp contrast to the lived reality of most of the world, which throughout its history has been more multilingual than unilingual. Monolingualism, then, not globalization, should be our primary concern.
Multilingualism can help us live in a more connected and more interdependent world. By widening access to technology, globalization can support indigenous and scholarly communities engaged in documenting and protecting our shared linguistic heritage. For the last 5,000 years, the rise and fall of languages was intimately tied to the plow, sword and book. In our digital age, the keyboard, screen and web will play a decisive role in shaping the future linguistic diversity of our species.
Many people believe that truth conveys power. . . . Hence sticking with the truth is the best strategy for gaining power. Unfortunately, this is just a comforting myth. In fact, truth and power have a far more complicated relationship, because in human society, power means two very different things.
On the one hand, power means having the ability to manipulate objective realities: to hunt animals, to construct bridges, to cure diseases, to build atom bombs. This kind of power is closely tied to truth. If you believe a false physical theory, you won’t be able to build an atom bomb. On the other hand, power also means having the ability to manipulate human beliefs, thereby getting lots of people to cooperate effectively. Building atom bombs requires not just a good understanding of physics, but also the coordinated labor of millions of humans. Planet Earth was conquered by Homo sapiens rather than by chimpanzees or elephants, because we are the only mammals that can cooperate in very large numbers. And large-scale cooperation depends on believing common stories. But these stories need not be true. You can unite millions of people by making them believe in completely fictional stories about God, about race or about economics. The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe in much more nonsense. . . .
When it comes to uniting people around a common story, fiction actually enjoys three inherent advantages over the truth. First, whereas the truth is universal, fictions tend to be local. Consequently if we want to distinguish our tribe from foreigners, a fictional story will serve as a far better identity marker than a true story. . . . The second huge advantage of fiction over truth has to do with the handicap principle, which says that reliable signals must be costly to the signaler. Otherwise, they can easily be faked by cheaters. . . . If political loyalty is signalled by believing a true story, anyone can fake it. But believing ridiculous and outlandish stories exacts greater cost, and is therefore a better signal of loyalty. . . . Third, and most important, the truth is often painful and disturbing. Hence if you stick to unalloyed reality, few people will follow you. An American presidential candidate who tells the American public the truth, the whole truth and nothing but the truth about American history has a 100 percent guarantee of losing the elections. . . . An uncompromising adherence to the truth is an admirable spiritual practice, but it is not a winning political strategy. . . .
Even if we need to pay some price for deactivating our rational faculties, the advantages of increased social cohesion are often so big that fictional stories routinely triumph over the truth in human history. Scholars have known this for thousands of years, which is why scholars often had to decide whether they served the truth or social harmony. Should they aim to unite people by making sure everyone believes in the same fiction, or should they let people know the truth even at the price of disunity?
Cuttlefish are full of personality, as behavioral ecologist Alexandra Schnell found out while researching the cephalopod's potential to display self-control. . . . “Self-control is thought to be the cornerstone of intelligence, as it is an important prerequisite for complex decision-making and planning for the future,” says Schnell . . .
[Schnell's] study used a modified version of the “marshmallow test” . . . During the original marshmallow test, psychologist Walter Mischel presented children between age four and six with one marshmallow. He told them that if they waited 15 minutes and didn’t eat it, he would give them a second marshmallow. A long-term follow-up study showed that the children who waited for the second marshmallow had more success later in life. . . . The cuttlefish version of the experiment looked a lot different. The researchers worked with six cuttlefish under nine months old and presented them with seafood instead of sweets. (Preliminary experiments showed that cuttlefishes’ favorite food is live grass shrimp, while raw prawns are so-so and Asian shore crab is nearly unacceptable.) Since the researchers couldn’t explain to the cuttlefish that they would need to wait for their shrimp, they trained them to recognize certain shapes that indicated when a food item would become available. The symbols were pasted on transparent drawers so that the cuttlefish could see the food that was stored inside. One drawer, labeled with a circle to mean “immediate,” held raw king prawn. Another drawer, labeled with a triangle to mean “delayed,” held live grass shrimp. During a control experiment, square labels meant “never.”
“If their self-control is flexible and I hadn’t just trained them to wait in any context, you would expect the cuttlefish to take the immediate reward [in the control], even if it’s their second preference,” says Schnell . . . and that’s what they did. That showed the researchers that cuttlefish wouldn’t reject the prawns if it was the only food available. In the experimental trials, the cuttlefish didn’t jump on the prawns if the live grass shrimp were labeled with a triangle—many waited for the shrimp drawer to open up. Each time the cuttlefish showed it could wait, the researchers tacked another ten seconds on to the next round of waiting before releasing the shrimp. The longest that a cuttlefish waited was 130 seconds.
Schnell [says] that the cuttlefish usually sat at the bottom of the tank and looked at the two food items while they waited, but sometimes, they would turn away from the king prawn “as if to distract themselves from the temptation of the immediate reward.” In past studies, humans, chimpanzees, parrots and dogs also tried to distract themselves while waiting for a reward.
Not every species can use self-control, but most of the animals that can share another trait: long, social lives. Cuttlefish, on the other hand, are solitary creatures that don’t form relationships even with mates or young. . . . “We don’t know if living in a social group is important for complex cognition unless we also show those abilities are lacking in less social species,” says . . . comparative psychologist Jennifer Vonk.
We cannot travel outside our neighbourhood without passports. We must wear the same plain clothes. We must exchange our houses every ten years. We cannot avoid labour. We all go to bed at the same time . . . We have religious freedom, but we cannot deny that the soul dies with the body, since ‘but for the fear of punishment, they would have nothing but contempt for the laws and customs of society'. . . . In More’s time, for much of the population, given the plenty and security on offer, such restraints would not have seemed overly unreasonable. For modern readers, however, Utopia appears to rely upon relentless transparency, the repression of variety, and the curtailment of privacy. Utopia provides security: but at what price? In both its external and internal relations, indeed, it seems perilously dystopian.
Such a conclusion might be fortified by examining selectively the tradition which follows More on these points. This often portrays societies where . . . ‘it would be almost impossible for man to be depraved, or wicked’ . . . This is achieved both through institutions and mores, which underpin the common life. . . . The passions are regulated and inequalities of wealth and distinction are minimized. Needs, vanity, and emulation are restrained, often by prizing equality and holding riches in contempt. The desire for public power is curbed. Marriage and sexual intercourse are often controlled: in Tommaso Campanella’s The City of the Sun (1623), the first great literary utopia after More’s, relations are forbidden to men before the age of twenty-one and women before nineteen. Communal child-rearing is normal; for Campanella this commences at age two. Greater simplicity of life, ‘living according to nature’, is often a result: the desire for simplicity and purity are closely related. People become more alike in appearance, opinion, and outlook than they often have been. Unity, order, and homogeneity thus prevail at the cost of individuality and diversity. This model, as J. C. Davis demonstrates, dominated early modern utopianism. . . . And utopian homogeneity remains a familiar theme well into the twentieth century.
Given these considerations, it is not unreasonable to take as our starting point here the hypothesis that utopia and dystopia evidently share more in common than is often supposed. Indeed, they might be twins, the progeny of the same parents. Insofar as this proves to be the case, my linkage of both here will be uncomfortably close for some readers. Yet we should not mistake this argument for the assertion that all utopias are, or tend to produce, dystopias. Those who defend this proposition will find that their association here is not nearly close enough. For we have only to acknowledge the existence of thousands of successful intentional communities in which a cooperative ethos predominates and where harmony without coercion is the rule to set aside such an assertion. Here the individual’s submersion in the group is consensual (though this concept is not unproblematic). It results not in enslavement but voluntary submission to group norms. Harmony is achieved without . . . harming others.
For the Maya of the Classic period, who lived in Southern Mexico and Central America between 250 and 900 CE, the category of ‘persons’ was not coincident with human beings, as it is for us. That is, human beings were persons – but other, nonhuman entities could be persons, too. . . . In order to explore the slippage of categories between ‘humans’ and ‘persons’, I examined a very specific category of ancient Maya images, found painted in scenes on ceramic vessels. I sought out instances in which faces (some combination of eyes, nose, and mouth) are shown on inanimate objects. . . . Consider my iPhone, which needs to be fed with electricity every night, swaddled in a protective bumper, and enjoys communicating with other fellow-phone-beings. Does it have personhood (if at all) because it is connected to me, drawing this resource from me as an owner or source? For the Maya (who did have plenty of other communicating objects, if not smartphones), the answer was no. Nonhuman persons were not tethered to specific humans, and they did not derive their personhood from a connection with a human. . . . It’s a profoundly democratising way of understanding the world. Humans are not more important persons – we are just one of many kinds of persons who inhabit this world. . . .
The Maya saw personhood as ‘activated’ by experiencing certain bodily needs and through participation in certain social activities. For example, among the faced objects that I examined, persons are marked by personal requirements (such as hunger, tiredness, physical closeness), and by community obligations (communication, interaction, ritual observance). In the images I examined, we see, for instance, faced objects being cradled in humans’ arms; we also see them speaking to humans. These core elements of personhood are both turned inward, what the body or self of a person requires, and outward, what a community expects of the persons who are a part of it, underlining the reciprocal nature of community membership.
Personhood was a nonbinary proposition for the Maya. Entities were able to be persons while also being something else. The faced objects I looked at indicate that they continue to be functional, doing what objects do (a stone implement continues to chop, an incense burner continues to do its smoky work). Furthermore, the Maya visually depicted many objects in ways that indicated the material category to which they belonged – drawings of the stone implement show that a person-tool is still made of stone. One additional complexity: the incense burner (which would have been made of clay, and decorated with spiky appliques representing the sacred ceiba tree found in this region) is categorised as a person – but also as a tree. With these Maya examples, we are challenged to discard the person/nonperson binary that constitutes our basic ontological outlook. . . . The porousness of boundaries that we have seen in the Maya world points towards the possibility of living with a certain uncategorisability of the world.
The sleights of hand that conflate consumption with virtue are a central theme in A Thirst for Empire, a sweeping and richly detailed history of tea by the historian Erika Rappaport. How did tea evolve from an obscure “China drink” to a universal beverage imbued with civilising properties? The answer, in brief, revolves around this conflation, not only by profit-motivated marketers but by a wide variety of interest groups. While abundant historical records have allowed the study of how tea itself moved from east to west, Rappaport is focused on the movement of the idea of tea to suit particular purposes.
Beginning in the 1700s, the temperance movement advocated for tea as a pleasure that cheered but did not inebriate, and industrialists soon borrowed this moral argument in advancing their case for free trade in tea (and hence more open markets for their textiles). Factory owners joined in, compelled by the cause of a sober workforce, while Christian missionaries discovered that tea “would soothe any colonial encounter”. During the Second World War, tea service was presented as a social and patriotic activity that uplifted soldiers and calmed refugees.
But it was tea’s consumer-directed marketing by importers and retailers – and later by brands – that most closely portends current trade debates. An early version of the “farm to table” movement was sparked by anti-Chinese sentiment and concerns over trade deficits, as well as by the reality and threat of adulterated tea containing dirt and hedge clippings. Lipton was soon advertising “from the Garden to Tea Cup” supply chains originating in British India and supervised by “educated Englishmen”. While tea marketing always presented direct consumer benefits (health, energy, relaxation), tea drinkers were also assured that they were participating in a larger noble project that advanced the causes of family, nation and civilization. . . .
Rappaport’s treatment of her subject is refreshingly apolitical. Indeed, it is a virtue that readers will be unable to guess her political orientation: both the miracle of markets and capitalism’s dark underbelly are evident in tea’s complex story, as are the complicated effects of British colonialism. . . . Commodity histories are now themselves commodities: recent works investigate cotton, salt, cod, sugar, chocolate, paper and milk. And morality marketing is now a commodity as well, applied to food, “fair trade” apparel and ecotourism. Yet tea is, Rappaport makes clear, a world apart – an astonishing success story in which tea marketers not only succeeded in conveying a sense of moral elevation to the consumer but also arguably did advance the cause of civilisation and community.
I have been offered tea at a British garden party, a Bedouin campfire, a Turkish carpet shop and a Japanese chashitsu, to name a few settings. In each case the offering was more an idea – friendship, community, respect – than a drink, and in each case the idea then created a reality. It is not a stretch to say that tea marketers have advanced the particularly noble cause of human dialogue and friendship.