Directions: Read the following passages and then answer IN COMPLETE SENTENCES the questions which follow each passage.

Scientists are preparing to boot up the world’s most powerful supercomputer, a machine with the power of 500,000 PCs and a thirst for electricity that will leave its owners with an annual bill of £25m. The computer, called Titan, will use graphics processors similar to those in PlayStation gaming consoles to tackle some of the toughest tasks in science. Until now most supercomputers have used normal processors, souped-up versions of those in laptops and PCs.

Decoding new flu strains—one of the most demanding jobs in computing—is one task that engineers at Oak Ridge National Laboratory in Tennessee might set Titan. The supercomputer could also design vaccines to stop the flu bugs before they can spread. Sumit Gupta, a senior engineer at Nvidia, the company that is making the Titan processors, said: "Computer simulations can explore 1m different drug [vaccine] candidates within weeks or months."

Titan will carry out 20,000 trillion calculations a second, about 4,000 trillion calculations a second faster than Sequoia, the world’s current fastest computer, which is used to simulate nuclear explosions. A typical PC carries out 40 billion calculations a second. "Oak Ridge is in the race to have the fastest supercomputer in the world," the laboratory said in a statement.

Supercomputing has been one of the fastest and most revolutionary of technological trends. As with ordinary computers, its history goes back to the 1930s and 1940s, when the first digital computers were built, with the first transistor-based machine being produced in 1956. The development of supercomputers owes much to the work of Seymour Cray, an electrical engineer who realized the potential of linking processors together to create much faster machines. Experts differ on which of his machines should be called the first supercomputer, but Cray-1, built in 1976, is commonly cited. Back then its ability to perform 160m calculations a second was seen as revolutionary. Nowadays that machine would have a fraction of the computing power of a smartphone.

"Computers like these have revolutionized science," said Paul Calleja, director of the high-performance computing centre at Cambridge University. "In the past, researchers devised theories and then they carried out experiments. What supercomputers do is offer us a third way—computer modelling. We can devise a theory about, say, the way atoms and molecules or materials might behave, and then build a computer model to see if it works."

Such approaches are now standard throughout science and engineering. In the aviation industry, for example, where engineers once tested the effects of bird strikes on aircraft by throwing or firing a dead chicken into a jet turbine, they now have vast databases on the composition of chickens and their behaviour when they hit spinning turbines. Similarly, car designers use computer models to test how vehicles will crumple in a crash and what injuries the occupants might sustain. In the past, such tests could be conducted only by using real cars occupied by dummies or even dead bodies. Computer modelling has sharply cut the need for such testing.

Titan will be made available to scientists in various fields. One programme will tackle climate change and how rising greenhouse gas emissions might affect different parts of the world.
Another will study the way fuel burns in diesel engines spinning thousands of times a minute, to find ways of boosting efficiency. "These types of calculations require massive computing power," said Calleja, whose own supercomputer at Cambridge has been used to design America’s Cup sailing boats. "The pressure to build even more powerful machines is huge."

Supercomputers are no longer the preserve of the military or academic establishments, however. Many high-street companies, from supermarkets to banks and insurance firms, own them. Tesco, for example, is investing in a £65m supercomputing system in Watford, Hertfordshire, to underpin its online retail and banking businesses. Such computing power, combined with data extracted from loyalty cards and other sources, means supermarkets can build models of consumer behaviour to predict what customers will want to buy even before the customers themselves know it. Walmart, the American owner of Asda, has been using a supercomputer for several years and has even combined it with weather forecasts to work out what products will be needed in stores when storms or other events arise.

Researchers at Cambridge are now working on perhaps the most ambitious computing project of all—to build a machine 150 times faster than Titan to help search for planets capable of supporting life. The computer, capable of between 2m trillion and 3m trillion calculations a second, will be hooked up to the Square Kilometre Array, a giant radio telescope made up of thousands of radio dishes that is under construction across South Africa and Australia. Calleja said the supercomputer’s key task would be to collate and analyse all the data captured by each dish. "It is the most ambitious project we have ever attempted," he said.

How are supercomputers connected with people’s daily lives?
Directions: In this section you will read several passages. Each one is followed by several questions about it. You are to choose the ONE best answer, A, B, C or D, to each question.

The momentum towards open publishing looks unstoppable, but more still needs to be done to make science truly accessible, says Stephen Curry.

If you would like to read the latest research from my lab, be my guest. Our report on a protein from a mouse version of the winter vomiting virus has just been published in the journal PLoS One and is available online for free—to anyone. Contrast that with my first paper, published in 1990, which you could only have read if you had access to a university library with an expensive subscription to the journal Biochemistry.

Back in 1990—before the world wide web—that was how scientific publishing was done. Today it is being transformed by open access publishers like the Public Library of Science. Rather than being funded by journal subscriptions, these publishers charge authors or their institutions the cost of publication and make their papers available for free online.

Many scientists are passionate supporters of open access and want to see the old model swept away. They have launched a protest movement dubbed the Academic Spring and organised a high-profile boycott of journals published by Elsevier. And the tide appears to be turning in their favour. This week the Finch Report, commissioned by the U.K. government, recommended that research papers—especially those funded by the taxpayer—should be made freely available to anyone who wants to read them.

Advocates of open access claim it has major advantages over the subscription model that has been around since academic journals were invented in the 17th century. They argue that science operates more effectively when findings can be accessed freely and immediately by scientists around the world. Better yet, it allows new results to be data-mined using powerful web-crawling technology that might spot connections between data—insights that no individual would be likely to make.

But if open access is so clearly superior, why has it not swept all before it? The model has been around for a decade, but about nine-tenths of the approximately 2 million research papers that appear every year are still published behind a paywall.

Part of the reason is scientists’ reluctance to abandon traditional journals and the established ranking among them. Not all journals are equal—they are graded by impact factor, which reflects the average number of times that the papers they publish are cited by others. Nature’s impact factor is 36, one of the highest going, whereas Biochemistry’s is around 3.2. Biochemistry is well regarded—many journals have lower factors—but a paper in Nature is still a much greater prize.

Unfortunately, it is prized for the wrong reasons. Impact factors apply to journals as a whole, not individual papers or their authors. Despite this, scientists are still judged on publications in high-impact journals; funding and promotion often depend on it. Consequently few are willing to risk bucking the trend. This has allowed several publishers to resist calls to abandon the subscription model.

Another reason for the slowness of the revolution is concern about quality. Unlike many traditional journals, PLoS One does not assess the significance of research during peer review; it simply publishes all papers judged to be technically sound. However, this concern proved unfounded.
PLoS One now publishes more papers than any other life science journal and has an impact factor of 4.4.

The world of scientific publishing is slowly changing and the hegemony of established journals is being challenged. Shaken by the competition, more of them are offering variants of open access. At the high end of the market, Nature is about to face competition from eLife, an open access journal to be launched later this year. Adding to the momentum, U.K. government research councils are increasingly insisting that the research they pay for be published in open access journals. The European Union is poised to do the same for the science it funds. In the U.S., a bill now before Congress would require all large federal funders to make papers freely available no later than six months after publication.

It can be learned from the passage that open publishing ______.
A. is changing the world of scientific publishing rapidly
B. is challenging the traditional subscription model
C. is unable to develop since the publishers do not get any subscription fees
D. is supported by most countries