List of top Questions asked in CAT

The Union Government’s present position vis-a-vis the upcoming United Nations conference on racial and related discrimination world-wide seems to be the following: discuss race please, not caste; caste is our very own and not at all as bad as you think. The gross hypocrisy of that position has been lucidly underscored by Kancha Ilaiah. Explicitly, the world community is to be cheated out of considering the matter on the technicality that caste is not, as a concept, tantamount to a racial category. Internally, however, allowing the issue to be put on the agenda at the said conference would, we are patriotically admonished, damage the country’s image. Somehow, India’s virtual beliefs elbow out concrete actualities. Inverted representations, as we know, have often been deployed in human histories as balm for the forsaken — religion being the most persistent of such inversions. Yet, we would humbly submit that if globalising our markets is thought to be good for the ’national’ pocket, globalising our social inequities might not be so bad for the mass of our people. After all, racism was as uniquely institutionalised in South Africa as caste discrimination has been within our society; why then can’t we permit the world community to express itself on the latter with a fraction of the zeal with which, through the years, we have pronounced on the former?
As to the technicality about whether or not caste is admissible into an agenda about race (that the conference is also about ’related discriminations’ tends to be forgotten), a reputed sociologist has recently argued that where race is a ’biological’ category, caste is a ’social’ one. Having earlier fiercely opposed implementation of the Mandal Commission Report, the said sociologist is at least to be complimented now for admitting, however tangentially, that caste discrimination is a reality, although, in his view, incompatible with racial discrimination. One would like quickly to offer the hypothesis that biology, in important ways that affect the lives of many millions, is in itself perhaps a social construction. But let us look at the matter in another way.
If it is agreed — as per the position today at which anthropological and allied scientific determinations rest — that the entire race of homo sapiens derived from an originary black African female (called ’Eve’), then one is hard put to understand how, on some subsequent ground, ontological distinctions are to be drawn either between races or castes. Let us also underline the distinction between the supposition that we are all god’s children and the rather more substantiated argument about our descent from ’Eve’, lest both positions be thought to be equally diversionary. It then stands to reason that all subsequent distinctions are, in modern parlance, ’constructed’ ones, and like all ideological constructions, attributable to changing equations between knowledge and power among human communities through contested histories here, there, and elsewhere.
This line of thought receives, thankfully, extremely consequential buttress from the findings of the Human Genome project. Contrary to earlier (chiefly 19th-century colonial) persuasions on the subject of race, as well as, one might add, the somewhat infamous Jensen offerings in the 20th century from America, those findings deny genetic difference between ’races’. If anything, they suggest that environmental factors impinge on gene-function, as a dialectic seems to unfold between nature and culture. It would thus seem that ’biology’ as the constitution of pigmentation enters the picture first only as a part of that dialectic. Taken together, the originary mother stipulation and the Genome findings ought indeed to furnish ground for human equality across the board, as well as yield policy initiatives towards equitable material dispensations aimed at building a global order where, in Hegel’s stirring formulation, only the rational constitutes the right. Such, sadly, is not the case as everyday fresh arbitrary grounds for discrimination are constructed in the interests of sectional dominance.

Studies of the factors governing reading development in young children have achieved a remarkable degree of consensus over the past two decades. The consensus concerns the causal role of phonological skills in young children’s reading progress. Children who have good phonological skills, or good ’phonological awareness’, become good readers and good spellers. Children with poor phonological skills progress more poorly. In particular, those who have a specific phonological deficit are likely to be classified as dyslexic by the time they are 9 or 10 years old.
Phonological skills in young children can be measured at a number of different levels. The term phonological awareness is a global one, and refers to the ability to recognise smaller units of sound within spoken words. Developmental work has shown that this awareness can be at the level of syllables, of onsets and rimes, or of phonemes. For example, a 4-year-old child might have difficulty in recognising that a word like valentine has three syllables, suggesting a lack of syllabic awareness. A five-year-old might have difficulty in recognising that the odd word out in the set of words fan, cat, hat, mat is fan. This task requires an awareness of the sub-syllabic units of the onset and the rime. The onset corresponds to any initial consonants in a syllable, and the rime corresponds to the vowel and to any following consonants. Rimes correspond to rhyme in single-syllable words, and so the rime in fan differs from the rime in cat, hat and mat. In longer words, rime and rhyme may differ. The onsets in val:en:tine are /v/ and /t/, and the rimes correspond to the spelling patterns ’al’, ’en’ and ’ine’.
A six-year-old might have difficulty in recognising that plea and pray begin with the same initial sound. This is a phonemic judgement. Although the initial phoneme /p/ is shared between the two words, in plea it is part of the onset ’pl’ and in pray it is part of the onset ’pr’. Until children can segment the onset (or the rime), such phonemic judgements are difficult for them to make. In fact, a recent survey of different developmental studies has shown that the different levels of phonological awareness appear to emerge sequentially. The awareness of syllables, onsets, and rimes appears to emerge at around the ages of 3 and 4, long before most children go to school. The awareness of phonemes, on the other hand, usually emerges at around the age of 5 or 6, when children have been taught to read for about a year. An awareness of onsets and rimes thus appears to be a precursor of reading, whereas an awareness of phonemes at every serial position in a word only appears to develop as reading is taught. The onset-rime and phonemic levels of phonological structure, however, are not distinct. Many onsets in English are single phonemes, and so are some rimes (e.g. sea, go, zoo).
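The onset/rime rule described above is positional: everything before the first vowel of a syllable is the onset, and the vowel plus anything after it is the rime. As a rough illustrative sketch (operating on spelling rather than actual phonemes, so it ignores digraphs, silent letters and the letter y), the rule can be expressed as:

```python
def split_onset_rime(syllable):
    """Split a written single syllable into onset (initial consonants)
    and rime (first vowel plus any following letters)."""
    vowels = "aeiou"
    for i, ch in enumerate(syllable):
        if ch in vowels:
            return syllable[:i], syllable[i:]
    return syllable, ""  # no vowel letter found

# fan -> ('f', 'an'); cat, hat and mat all share the rime 'at',
# while plea and pray share only the initial phoneme, not the onset.
```

On this approximation, fan splits into ('f', 'an'), so its rime 'an' differs from the 'at' shared by cat, hat and mat, while plea yields the onset 'pl' and pray the onset 'pr', matching the passage's examples.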
The early availability of onsets and rimes is supported by studies that have compared the development of phonological awareness of onsets, rimes, and phonemes in the same subjects using the same phonological awareness tasks. For example, a study by Treiman and Zukowski used a same/different judgement task based on the beginning or the end sounds of words. In the beginning sound task, the words either began with the same onset, as in plea and plank, or shared only the initial phoneme, as in plea and pray. In the end-sound task, the words either shared the entire rime, as in spit and wit, or shared only the final phoneme, as in rat and wit. Treiman and Zukowski showed that four- and five-year-old children found the onset-rime version of the same/different task significantly easier than the version based on phonemes. Only the six-year-olds, who had been learning to read for about a year, were able to perform both versions of the tasks with an equal level of success.

Billie Holiday died a few weeks ago. I have been unable until now to write about her, but since she will survive many who receive longer obituaries, a short delay in one small appreciation will not harm her or us. When she died we — the musicians, critics, all who were ever transfixed by the most heart-rending voice of the past generation — grieved bitterly. There was no reason to. Few people pursued self-destruction more whole-heartedly than she, and when the pursuit was at an end, at the age of 44, she had turned herself into a physical and artistic wreck. Some of us tried gallantly to pretend otherwise, taking comfort in the occasional moments when she still sounded like a ravaged echo of her greatness.
Others had not even the heart to see and listen any more. We preferred to stay home and, if old and lucky enough to own the incomparable records of her heyday from 1937 to 1946, many of which are not even available on British LP, to recreate those coarse-textured, sinuous, sensual and unbearably sad noises which gave her a sure corner of immortality. Her physical death called, if anything, for relief rather than sorrow. What sort of middle age would she have faced without the voice to earn money for her drinks and fixes, without the looks — and in her day she was hauntingly beautiful — to attract the men she needed, without business sense, without anything but the disinterested worship of ageing men who had heard and seen her in her glory?
And yet, irrational though it is, our grief expressed Billie Holiday’s art, that of a woman for whom one must be sorry. The great blues singers, to whom she may be justly compared, played their game from strength. Lionesses, though often wounded or at bay (did not Bessie Smith call herself ’a tiger, ready to jump’?), their tragic equivalents were Cleopatra and Phaedra; Holiday’s was an embittered Ophelia. She was the Puccini heroine among blues singers, or rather among jazz singers, for though she sang a cabaret version of the blues incomparably, her natural idiom was the pop song. Her unique achievement was to have twisted this into a genuine expression of the major passions by means of a total disregard of its sugary tunes, or indeed of any tune other than her own few delicately crying elongated notes, phrased like Bessie Smith or Louis Armstrong in sackcloth, sung in a thin, gritty, haunting voice whose natural mood was an unresigned and voluptuous welcome for the pains of love. Nobody has sung, or will sing, Bess’s songs from Porgy as she did. It was this combination of bitterness and physical submission, as of someone lying still while watching his legs being amputated, which gives such a blood-curdling quality to her Strange Fruit, the anti-lynching poem which she turned into an unforgettable art song. Suffering was her profession; but she did not accept it.
Little need be said about her horrifying life, which she described with emotional, though hardly with factual, truth in her autobiography Lady Sings the Blues. After an adolescence in which self-respect was measured by a girl’s insistence on picking up the coins thrown to her by clients with her hands, she was plainly beyond help. She did not lack it, for she had the flair and scrupulous honesty of John Hammond to launch her, the best musicians of the 1930s to accompany her — notably Teddy Wilson, Frankie Newton and Lester Young — the boundless devotion of all serious connoisseurs, and much public success. It was too late to arrest a career of systematic embittered self-immolation. To be born with both beauty and self-respect in the Negro ghetto of Baltimore in 1915 was too much of a handicap, even without rape at the age of 10 and drug-addiction in her teens. But, while she destroyed herself, she sang, unmelodious, profound and heartbreaking. It is impossible not to weep for her, or not to hate the world which made her what she was.

The narrative of Dersu Uzala is divided into two major sections, set in 1902 and 1907, that deal with separate expeditions which Arseniev conducts into the Ussuri region. In addition, a third time frame forms a prologue to the film. Each of the temporal frames has a different focus, and by shifting them Kurosawa is able to describe the encroachment of settlements upon the wilderness and the consequent erosion of Dersu’s way of life. As the film opens, that erosion has already begun. The first image is a long shot of a huge forest, the trees piled upon one another by the effects of the telephoto lens so that the landscape becomes an abstraction and appears like a huge curtain of green. A title informs us that the year is 1910. This is as late into the century as Kurosawa will go.
After this prologue, the events of the film will transpire even farther back in time and will be presented as Arseniev’s recollections. The character of Dersu Uzala is the heart of the film, his life the example that Kurosawa wishes to affirm. Yet the formal organization of the film works to contain, to close, to circumscribe that life by erecting a series of obstacles around it. The film itself is circular, opening and closing by Dersu’s grave, thus sealing off the character from the modern world to which Kurosawa once so desperately wanted to speak. The multiple time frames also work to maintain a separation between Dersu and the contemporary world. We must go back farther even than 1910 to discover who he was. But this narrative structure has yet another implication. It safeguards Dersu’s example, inoculates it from contamination with history, and protects it from contact with the industrialised, urban world. Time is organised by the narrative into a series of barriers, which enclose Dersu in a kind of vacuum chamber, protecting him from the social and historical dialectics that destroyed the other Kurosawa heroes. Within the film, Dersu does die, but the narrative structure attempts to immortalise him and his example, as Dersu passes from history into myth.
We see all this at work in the enormously evocative prologue. The camera tilts down to reveal felled trees littering the landscape and an abundance of construction. Roads and houses outline the settlement that is being built. Kurosawa cuts to a medium shot of Arseniev standing in the midst of the clearing, looking uncomfortable and disoriented. A man passing in a wagon asks him what he is doing, and the explorer says he is looking for a grave. The driver replies that no one has died here, the settlement is too recent. These words enunciate the temporal rupture that the film studies. It is the beginning of things (industrial society) and the end of things (the forest), the commencement of one world so young that no one has had time yet to die and the eclipse of another, in which Dersu had died. It is his grave for which the explorer searches. His passing symbolises the new order, the development that now surrounds Arseniev. The explorer says he buried his friend three years ago next to huge cedar and fir trees, but now they are all gone. The man on the wagon replies they were probably chopped down when the settlement was built, and he drives off. Arseniev walks to a barren, treeless spot next to a pile of bricks. As he moves, the camera tracks and pans to follow, revealing a line of freshly built houses and a woman hanging her laundry to dry. A distant train whistle is heard, and the sounds of construction in the clearing vie with the cries of birds and the rustle of wind in the trees. Arseniev pauses, looks around for the grave that once was, and murmurs desolately, ’Dersu’. The image now cuts farther into the past, to 1902, and the first section of the film commences, which describes Arseniev’s meeting with Dersu and their friendship.
Kurosawa defines the world of the film initially upon a void, a missing presence. The grave is gone, brushed aside by a world rushing into modernism, and now the hunter exists only in Arseniev’s memories. The hallucinatory dreams and visions of Dodeskaden are succeeded by nostalgic, melancholy ruminations. Yet by exploring these ruminations, the film celebrates the timelessness of Dersu’s wisdom. The first section of the film has two purposes: to describe the magnificence and inhuman vastness of nature and to delineate the code of ethics by which Dersu lives and which permits him to survive in these conditions. When Dersu first appears, the other soldiers treat him with condescension and laughter, but Arseniev watches him closely and does not share their derisive response. Unlike them, he is capable of immediately grasping Dersu’s extraordinary qualities. In camp, Kurosawa frames Arseniev by himself, sitting on the other side of the fire from his soldiers. While they sleep or joke among themselves, he writes in his diary and Kurosawa cuts in several point-of-view shots from his perspective of trees that appear animated and sinister as the firelight dances across their gnarled, leafless outlines. This reflective dimension, this sensitivity to the spirituality of nature, distinguishes him from the others and forms the basis of his receptivity to Dersu and their friendship. It makes him a fit pupil for the hunter.

Democracy rests on a tension between two different principles. There is, on the one hand, the principle of equality before the law, or, more generally, of equality, and, on the other, what may be described as the leadership principle. The first gives priority to rules and the second to persons. No matter how skilfully we contrive our schemes, there is a point beyond which the one principle cannot be promoted without some sacrifice of the other.
Alexis de Tocqueville, the great 19th-century writer on democracy, maintained that the age of democracy, whose birth he was witnessing, would also be the age of mediocrity. In saying this, he was thinking primarily of a regime of equality governed by impersonal rules. Despite his strong attachment to democracy, he took great pains to point out what he believed to be its negative side: a dead level plane of achievement in practically every sphere of life. The age of democracy would, in his view, be an unheroic age; there would not be room in it for either heroes or hero-worshippers.
But modern democracies have not been able to do without heroes: this too was foreseen, with much misgiving, by Tocqueville. Tocqueville viewed this with misgiving because he believed, rightly or wrongly, that unlike in aristocratic societies there was no proper place in a democracy for heroes and, hence, when they arose they would sooner or later turn into despots. Whether they require heroes or not, democracies certainly require leaders, and, in the contemporary age, breed them in great profusion; the problem is to know what to do with them.
In a world preoccupied with scientific rationality the advantages of a system based on an impersonal rule of law should be a recommendation with everybody. There is something orderly and predictable about such a system. When life is lived mainly in small, self-contained communities, men are able to take finer personal distinctions into account in dealing with their fellow men. They are unable to do this in a large and amorphous society, and organised living would be impossible here without a system of impersonal rules. Above all, such a system guarantees a kind of equality to the extent that everybody, no matter in what station of life, is bound by the same explicit, often written, rules and nobody is above them.
But a system governed solely by impersonal rules can at best ensure order and stability; it cannot create any shining vision of a future in which mere formal equality will be replaced by real equality and fellowship. A world governed by impersonal rules cannot easily change itself, or when it does, the change is so gradual as to make the basic and fundamental features of society appear unchanged. For any kind of basic or fundamental change, a push is needed from within, a kind of individual initiative which will create new rules, new terms and conditions of life.
The issue of leadership thus acquires crucial significance in the context of change. If the modern age is preoccupied with scientific rationality, it is no less preoccupied with change. To accept what exists on its own terms is traditional, not modern, and it may be all very well to appreciate tradition in music, dance and drama, but for society as a whole the choice has already been made in favour of modernisation and development. Moreover, in some countries the gap between ideal and reality has become so great that the argument for development and change is now irresistible.
In these countries no argument for development has greater appeal or urgency than the one which shows development to be the condition for the mitigation, if not the elimination, of inequality. There is something contradictory about the very presence of large inequalities in a society which professes to be democratic. It does not take people too long to realise that democracy by itself can guarantee only formal equality; beyond this, it can only whet people’s appetite for real or substantive equality. From this arises their continued preoccupation with plans and schemes that will help to bridge the gap between the ideal of equality and the reality which is so contrary to it.
When pre-existing rules give no clear directions of change, leadership comes into its own. Every democracy invests its leadership with a measure of charisma, and expects from it a corresponding measure of energy and vitality. Now, the greater the urge for change in a society the stronger the appeal of a dynamic leadership in it. A dynamic leadership seeks to free itself from the constraints of existing rules: in a sense that is the test of its dynamism. In this process it may take a turn at which it ceases to regard itself as being bound by these rules, placing itself above them. There is always a tension between ’charisma’ and ’discipline’ in the case of a democratic leadership, and when this leadership puts forward revolutionary claims, the tension tends to be resolved at the expense of discipline.
Characteristically, the legitimacy of such a leadership rests on its claim to be able to abolish or at least substantially reduce the existing inequalities in society. From the argument that formal equality or equality before the law is but a limited good, it is often one short step to the argument that it is a hindrance or an obstacle to the establishment of real or substantive equality. The conflict between a ’progressive’ executive and a ’conservative’ judiciary is but one aspect of this larger problem. This conflict naturally acquires added piquancy when the executive is elected and the judiciary appointed.

In the modern scientific story, light was created not once but twice. The first time was in the Big Bang, when the universe began its existence as a glowing, expanding fireball, which cooled off into darkness after a few million years. The second time was hundreds of millions of years later, when the cold material condensed into dense clumps under the influence of gravity, and ignited to become the first stars.
Sir Martin Rees, Britain’s astronomer royal, named the long interval between these two enlightenments the cosmic ‘Dark Age’. The name describes not only the poorly lit conditions, but also the ignorance of astronomers about that period. Nobody knows exactly when the first stars formed, or how they organised themselves into galaxies — or even whether stars were the first luminous objects. They may have been preceded by quasars, which are mysterious, bright spots found at the centres of some galaxies.
Now two independent groups of astronomers, one led by Robert Becker of the University of California, Davis, and the other by George Djorgovski of Caltech, claim to have peered far enough into space with their telescopes (and therefore backwards enough in time) to observe the closing days of the Dark Age.
The main problem that plagued previous efforts to study the Dark Age was not the lack of suitable telescopes, but rather the lack of suitable things at which to point them. Because these events took place over 13 billion years ago, if astronomers are to have any hope of unravelling them they must study objects that are at least 13 billion light years away. The best prospects are quasars, because they are so bright and compact that they can be seen across vast stretches of space. The energy source that powers a quasar is unknown, although it is suspected to be the intense gravity of a giant black hole. However, at the distances required for the study of the Dark Age, even quasars are extremely rare and faint.
Recently some members of Dr Becker’s team announced their discovery of the four most distant quasars known. All the new quasars are terribly faint, a challenge that both teams overcame by peering at them through one of the twin Keck telescopes in Hawaii. These are the world’s largest, and can therefore collect the most light. The new work by Dr Becker’s team analysed the light from all four quasars. Three of them appeared to be similar to ordinary, less distant quasars. However, the fourth and most distant, unlike any other quasar ever seen, showed unmistakable signs of being shrouded in a fog of hydrogen gas: new-born stars and quasars emit mainly ultraviolet light, and hydrogen gas is opaque to ultraviolet. Seeing this fog had been the goal of would-be Dark Age astronomers since 1965, when James Gunn and Bruce Peterson spelled out the technique for using quasars as backlighting beacons to observe the fog’s ultraviolet shadow.
The fog prolonged the period of darkness until the heat from the first stars and quasars had the chance to ionise the hydrogen (breaking it into its constituent parts, protons and electrons). Ionised hydrogen is transparent to ultraviolet radiation, so at that moment the fog lifted and the universe became the well-lit place it is today. For this reason, the end of the Dark Age is called the ‘Epoch of Re-ionisation’. Because the ultraviolet shadow is visible only in the most distant of the four quasars, Dr Becker’s team concluded that the fog had dissipated completely by the time the universe was about 900 million years old, and one-seventh of its current size.