In my book “Searches,” I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It’s a dynamic that underpins big tech’s accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews, and my ChatGPT dialogues. . . .
People often describe chatbots’ output as “bland” or “generic” – the linguistic equivalent of a beige office building. OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe . . .
Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech – including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I’m not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn’t attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data – though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.” . . . OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people to use products such as ChatGPT even more than they already do – a goal that is easier to accomplish if people see those products as trustworthy collaborators.
To answer this question, we need to identify the statement that does not affirm a disjunction between what tech companies claim about AI and what AI actually does, based on the provided passage.
Based on this analysis, the correct answer is the first option: it does not point to any such disjunction but instead notes the absence of research or evidence for one specific claim, namely whether ChatGPT is biased toward big tech.
The question asks us to identify which reason is NOT used by the author to compare AI-generated texts to "a beige office building." Let's analyze each option based on the provided passage:
The passage mentions that AI output is often described as "bland" or "generic," similar to a "beige office building," which aligns with this option. Therefore, it is a valid reason for the comparison.
The passage notes that OpenAI aims for its products to "sound like a colleague" and to be "polite" and "engaging." This supports the comparison to "a beige office building", which is neutral and unassuming. Hence, this option is also a reason for the comparison.
The passage explicitly mentions that part of the strategy is to make users feel "safe" and to "foster trust and confidence." This aligns with the comparison, as a "beige office building" might symbolize neutrality and reliability. This is another valid reason for the comparison.
This point refers to the AI's response to criticism about biases. While the passage mentions this behavior, it does not link it to the comparison with "a beige office building." The comparison is about the tone and nature of the output, not the AI's response to criticism.
The correct answer is therefore the option that is not used for the comparison: AI tends to blame its training data when scrutinized for its biases.


When people who are talking don’t share the same culture, knowledge, values, and assumptions, mutual understanding can be especially difficult. Such understanding is possible through the negotiation of meaning. To negotiate meaning with someone, you have to become aware of and respect both the differences in your backgrounds and when these differences are important. You need enough diversity of cultural and personal experience to be aware that divergent world views exist and what they might be like. You also need flexibility in world view and a generous tolerance for mistakes, as well as a talent for finding the right metaphor to communicate the relevant parts of unshared experiences or to highlight the shared experiences while de-emphasizing the others. Metaphorical imagination is a crucial skill in creating rapport and in communicating the nature of unshared experience. This skill consists, in large measure, of the ability to bend your world view and adjust the way you categorize your experiences. Problems of mutual understanding are not exotic; they arise in all extended conversations where understanding is important.
When it really counts, meaning is almost never communicated according to the CONDUIT metaphor, that is, where one person transmits a fixed, clear proposition to another by means of expressions in a common language, where both parties have all the relevant common knowledge, assumptions, values, etc. When the chips are down, meaning is negotiated: you slowly figure out what you have in common, what it is safe to talk about, how you can communicate unshared experience or create a shared vision. With enough flexibility in bending your world view and with luck and charity, you may achieve some mutual understanding.
Communication theories based on the CONDUIT metaphor turn from the pathetic to the evil when they are applied indiscriminately on a large scale, say, in government surveillance or computerized files. There, what is most crucial for real understanding is almost never included, and it is assumed that the words in the file have meaning in themselves—disembodied, objective, understandable meaning. When a society lives by the CONDUIT metaphor on a large scale, misunderstanding, persecution, and much worse are the likely products.
Later, I realized that reviewing the history of nuclear physics served another purpose as well: It gave the lie to the naive belief that the physicists could have come together when nuclear fission was discovered (in Nazi Germany!) and agreed to keep the discovery a secret, thereby sparing humanity such a burden. No. Given the development of nuclear physics up to 1938, development that physicists throughout the world pursued in all innocence of any intention of finding the engine of a new weapon of mass destruction—only one of them, the remarkable Hungarian physicist Leo Szilard, took that possibility seriously—the discovery of nuclear fission was inevitable. To stop it, you would have had to stop physics. If German scientists hadn’t made the discovery when they did, French, American, Russian, Italian, or Danish scientists would have done so, almost certainly within days or weeks. They were all working at the same cutting edge, trying to understand the strange results of a simple experiment bombarding uranium with neutrons. Here was no Faustian bargain, as movie directors and other naifs still find it intellectually challenging to imagine. Here was no evil machinery that the noble scientists might have hidden from the politicians and the generals. To the contrary, here was a new insight into how the world works, an energetic reaction, older than the earth, that science had finally devised the instruments and arrangements to coax forth. “Make it seem inevitable,” Louis Pasteur used to advise his students when they prepared to write up their discoveries. But it was. To wish that it might have been ignored or suppressed is barbarous. “Knowledge,” Niels Bohr once noted, “is itself the basis for civilization.” You cannot have the one without the other; the one depends upon the other. Nor can you have only benevolent knowledge; the scientific method doesn’t filter for benevolence. Knowledge has consequences, not always intended, not always comfortable, not always welcome. The earth revolves around the sun, not the sun around the earth. “It is a profound and necessary truth,” Robert Oppenheimer would say, “that the deep things in science are not found because they are useful; they are found because it was possible to find them.”
...Bohr proposed once that the goal of science is not universal truth. Rather, he argued, the modest but relentless goal of science is “the gradual removal of prejudices.” The discovery that the earth revolves around the sun has gradually removed the prejudice that the earth is the center of the universe. The discovery of microbes is gradually removing the prejudice that disease is a punishment from God. The discovery of evolution is gradually removing the prejudice that Homo sapiens is a separate and special creation.
For any natural number $k$, let $a_k = 3^k$. The smallest natural number $m$ for which \[ (a_1)^1 \times (a_2)^2 \times \dots \times (a_{20})^{20} \;<\; a_{21} \times a_{22} \times \dots \times a_{20+m} \] is:
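A brief sketch of one way to work this out, reducing the comparison of the two products to a comparison of exponents of 3: since $a_k = 3^k$, the left-hand side is
\[
(a_1)^1 \times (a_2)^2 \times \dots \times (a_{20})^{20} = 3^{\sum_{k=1}^{20} k^2} = 3^{\frac{20 \cdot 21 \cdot 41}{6}} = 3^{2870},
\]
while the right-hand side is
\[
a_{21} \times a_{22} \times \dots \times a_{20+m} = 3^{\sum_{j=21}^{20+m} j} = 3^{\frac{(20+m)(21+m)}{2} - 210}.
\]
The inequality therefore holds exactly when
\[
\frac{(20+m)(21+m)}{2} - 210 > 2870, \quad\text{that is,}\quad (20+m)(21+m) > 6160.
\]
Since $77 \times 78 = 6006 < 6160$ and $78 \times 79 = 6162 > 6160$, the smallest natural number $m$ satisfying the inequality is $m = 58$.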