Saturday, August 15, 2009

Laptop

A laptop is a personal computer designed for mobile use and small enough to sit on one's lap. A laptop integrates most of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a trackpad, and/or a pointing stick), speakers, and often a battery, into a single small and light unit. The rechargeable battery (if present) is charged from an AC adapter and typically stores enough energy to run the laptop for two to three hours in its initial state, depending on the configuration and power management of the computer.
Laptops are usually shaped like a large notebook with thicknesses between 0.7–1.5 inches (18–38 mm) and dimensions ranging from 10x8 inches (25x20 cm, 13" display) to 15x11 inches (38x28 cm, 17" display) and up. Modern laptops weigh 3 to 12 pounds (1.4 to 5.4 kg); older laptops were usually heavier. Most laptops use the flip form factor to protect the screen and the keyboard when closed. Modern tablet laptops have a complex joint between the keyboard housing and the display, permitting the display panel to swivel and then lie flat on the keyboard housing. They usually have a touchscreen display and some include handwriting recognition or graphics drawing capability.
Laptops were originally considered to be "a small niche market" and were thought suitable mostly for "specialized field applications" such as "the military, the Internal Revenue Service, accountants and sales representatives". But today, there are already more laptops than desktops in businesses, and laptops are becoming obligatory for student use and more popular for general use. In 2008 more laptops than desktops were sold in the US and it has been predicted that the same milestone will be reached in the worldwide market as soon as late 2009.

Components of a Laptop

The basic components of laptops are similar in function to their desktop counterparts, but are miniaturized, adapted to mobile use, and designed for low power consumption. Because of the additional requirements, laptop components are usually of inferior performance compared to similarly priced desktop parts. Furthermore, the design bounds on power, size, and cooling of laptops limit the maximum performance of laptop parts compared to that of desktop components.
The following list summarizes the differences and distinguishing features of laptop components in comparison to desktop personal computer parts:
  • Motherboard – Laptop motherboards are highly make- and model-specific, and do not conform to a desktop form factor. Unlike a desktop board, which usually has several slots for expansion cards (3 to 7 are common), a board for a small, highly integrated laptop may have no expansion slots at all, with all the functionality implemented on the motherboard itself; the only expansion possible in this case is via an external port such as USB. Other boards may have one or more standard expansion slots (such as ExpressCard) or proprietary ones. Several other functions (storage controllers, networking, sound card and external ports) are also implemented on the motherboard.
  • Central processing unit (CPU) – Laptop CPUs have advanced power-saving features and produce less heat than desktop processors, but are not as powerful. There is a wide range of CPUs designed for laptops available from Intel (Pentium M, Celeron M, Intel Core and Core 2 Duo), AMD (Athlon, Turion 64, and Sempron), VIA Technologies, Transmeta and others. On non-x86 architectures, Motorola and IBM produced the chips for the former PowerPC-based Apple laptops (iBook and PowerBook). Some laptops have removable CPUs, although motherboard support may be restricted to specific models. In other laptops the CPU is soldered to the motherboard and is non-replaceable.
  • Memory (RAM) – The SO-DIMM memory modules usually found in laptops are about half the size of desktop DIMMs.[28] They may be accessible from the bottom of the laptop for ease of upgrading, or placed in locations not intended for user replacement, such as between the keyboard and the motherboard. Currently, most midrange laptops are factory-equipped with 3–4 GB of DDR2 RAM, while some higher-end notebooks feature up to 8 GB of DDR3 memory. Netbooks, however, are commonly equipped with only 1 GB of RAM to keep manufacturing costs low.
  • Expansion cards – A PC Card (formerly PCMCIA) or ExpressCard bay for expansion cards is often present on laptops to allow adding and removing functionality, even when the laptop is powered on. Some subsystems (such as Ethernet, Wi-Fi, or a cellular modem) can be implemented as replaceable internal expansion cards, usually accessible under an access cover on the bottom of the laptop. Two popular standards for such cards are MiniPCI and its successor, the PCI Express Mini.
  • Power supply – Laptops are typically powered by an internal rechargeable battery that is charged using an external power supply. The power supply can charge the battery and power the laptop simultaneously; when the battery is fully charged, the laptop continues to run on AC power. The charger adds about 400 grams (1 lb) to the overall "transport weight" of the notebook.
  • Battery – Current laptops use lithium-ion batteries, with more recent models using the newer lithium-polymer technology. These two technologies have largely replaced the older nickel metal-hydride batteries. Typical battery life for standard laptops is two to five hours of light-duty use, but may drop to as little as one hour when doing power-intensive tasks. A battery's performance gradually decreases with time, leading to an eventual replacement in one to three years, depending on the charging and discharging pattern. This large-capacity main battery should not be confused with the much smaller battery nearly all computers use to run the real-time clock and to store the BIOS configuration in the CMOS memory when the computer is off. Lithium-ion batteries do not suffer from the memory effect that older battery types may exhibit; the memory effect occurs when a battery is repeatedly recharged without first being fully discharged. Recent advances in laptop and battery design have produced combinations that can provide up to a full 24 hours of continuous operation, assuming average power consumption levels; an example is the HP EliteBook 6930p when used with its ultra-capacity battery.
  • Video display controller – On standard laptops the video controller is usually integrated into the chipset. This tends to limit the use of laptops for gaming and entertainment, two fields which have constantly escalating hardware demands. Higher-end laptops and desktop replacements in particular often come with dedicated graphics processors on the motherboard or as an internal expansion card. These mobile graphics processors are comparable in performance to mainstream desktop graphic accelerator boards.
  • Display – Most modern laptops feature 12 inches (30 cm) or larger color active-matrix displays based on a CCFL lamp, with resolutions of 1280x800 (16:10) or 1366x768 (16:9) pixels and above. Many current models use screens with higher resolution than is typical for desktop PCs (for example, the 1440x900 resolution of a 15" MacBook Pro[34] can be found on 19" widescreen desktop monitors). Newer laptops come with LED-based screens, offering lower power consumption and wider viewing angles.
  • Removable media drives – A DVD/CD reader/writer drive is typically standard. CD-only drives are becoming rare, while Blu-ray is becoming more common on notebooks. Many ultraportables and netbooks either move the removable media drive into the docking station or exclude it altogether.
  • Internal storage – Laptop hard disks are physically smaller—2.5 inches (64 mm) or 1.8 inches (46 mm)—compared to desktop 3.5-inch (89 mm) drives. Some newer laptops (usually ultraportables) instead employ more expensive but faster, lighter and more power-efficient flash-memory-based SSDs. Currently, 250 to 320 GB sizes are common for laptop hard disks (64 to 128 GB for SSDs).
  • Input – A pointing stick, touchpad or both are used to control the position of the cursor on the screen, and an integrated keyboard is used for typing. An external keyboard and/or mouse may be connected using USB or PS/2 (if present).
  • Ports – Several USB ports, an external monitor port (VGA or DVI), audio in/out, and an Ethernet network port are found on most laptops. Less common are legacy ports such as a PS/2 keyboard/mouse port, a serial port or a parallel port. S-video or composite video ports are more common on consumer-oriented notebooks. HDMI may be found on some higher-end notebooks.

Advantages of Laptops

Portability is usually the first feature mentioned in any comparison of laptops versus desktop PCs. Portability means that a laptop can be used in many places—not only at home and at the office, but also during commuting and flights, in coffee shops, in lecture halls and libraries, at a client's location or in a meeting room, etc. The portability feature offers several distinct advantages:
  • Getting more done – Using a laptop in places where a desktop PC can't be used, and at times that would otherwise be wasted. For example, an office worker managing his e-mails during an hour-long commute by train, or a student doing her homework at the university coffee shop during a break between lectures.
  • Immediacy – Carrying a laptop means having instant access to various information, personal and work files. Immediacy allows better collaboration between coworkers or students, as a laptop can be flipped open to present a problem or a solution anytime, anywhere.
  • Up-to-date information – If a person has more than one desktop PC, a problem of synchronization arises: changes made on one computer are not automatically propagated to the others. There are ways to resolve this problem, including physical transfer of updated files (using a USB stick or CDs) or using synchronization software over the Internet. However, using a single laptop at both locations avoids the problem entirely, as the files exist in a single location and are always up-to-date.
  • Connectivity – A proliferation of Wi-Fi wireless networks and cellular broadband data services (HSDPA, EVDO and others) combined with a near-ubiquitous support by laptops means that a laptop can have easy Internet and local network connectivity while remaining mobile. Wi-Fi networks and laptop programs are especially widespread at university campuses.
    Other advantages of laptops include:
  • Size – Laptops are smaller than standard PCs. This is beneficial when space is at a premium, for example in small apartments and student dorms. When not in use, a laptop can be closed and put away.
  • Ease of access – Most laptops have doors on the underside that allow the user to access the memory, hard drive and other components by simply flipping the laptop over. With desktops the user must usually access the back of the computer, which is harder if it is in an area with little space.
  • Low power consumption – Laptops are several times more power-efficient than desktops. A typical laptop uses 20–90 W, compared to 100–800 W for desktops. This can be particularly beneficial for businesses (which run hundreds of personal computers, multiplying the potential savings) and homes where a computer runs 24/7 (such as a home media server, print server, etc.); a rough cost calculation is sketched after this list.
  • Quiet – Laptops are often quieter than desktops, due both to the components (quieter, slower 2.5-inch hard drives) and to less heat production leading to use of fewer and slower cooling fans.
  • Battery – A charged laptop can run for several hours in case of a power outage and is not affected by short power interruptions and brownouts. A desktop PC needs a UPS to handle short interruptions, brownouts and spikes; achieving an on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS.
  • All-in-one – Designed to be portable, laptops have everything integrated into the chassis. With desktops (excluding all-in-ones) this is divided among the tower, keyboard, mouse, display, and optional peripherals such as speakers and a webcam, which leads to a tangle of wiring and can also increase overall power consumption.
  • Extras – In comparison to low-end desktops, even low-end laptops include features such as a Wi-Fi card, an ExpressCard slot, and a memory card reader.
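As a rough illustration of the power savings mentioned in the list above, here is a minimal back-of-the-envelope sketch in Python. It is an added example, not part of the original text: the 50 W and 300 W figures are mid-range values from the wattage ranges quoted above, and the electricity price and 24/7 duty cycle are assumptions chosen purely for illustration.

```python
# Rough, illustrative comparison of annual energy cost: laptop vs. desktop.
# The wattages are mid-range values from the text above; the electricity
# price and the around-the-clock duty cycle are assumptions for the example.

HOURS_PER_YEAR = 24 * 365   # machine assumed to run 24/7 (e.g. a media server)
PRICE_PER_KWH = 0.12        # assumed electricity price, USD per kilowatt-hour

def annual_cost(watts):
    """Annual electricity cost for a device drawing `watts` continuously."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * PRICE_PER_KWH

laptop_cost = annual_cost(50)    # a typical laptop, ~50 W
desktop_cost = annual_cost(300)  # a typical desktop, ~300 W

print(f"Laptop:  ${laptop_cost:.2f}/year")   # Laptop:  $52.56/year
print(f"Desktop: ${desktop_cost:.2f}/year")  # Desktop: $315.36/year
print(f"Savings: ${desktop_cost - laptop_cost:.2f}/year per machine")
```

Multiplied across hundreds of machines in a business, the difference becomes substantial, which is the point of the "multiplying the potential savings" remark above.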

Wednesday, July 8, 2009

History of computing

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.
The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.
The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.
In the late 1880s Herman Hollerith invented the recording of data on a machine readable medium. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator, and the key punch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded to be the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time Magazine in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine."
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication, including complex arithmetic and programmability.


A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:

  • Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating-point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it by that standard the world's first operational computer.
  • The non-programmable Atanasoff–Berry Computer (1941), which used vacuum tube-based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.
  • The secret British Colossus computers (1943),[12] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.
  • The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
  • The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general-purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture that essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement to mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Computer

A computer is a machine that manipulates data according to a set of instructions.
Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are however the most numerous.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

Memory of the computer

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
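To make the cell-and-address picture concrete, the following short Python sketch (an illustration added here, not part of the original text) models memory as a list of numbered cells and carries out exactly the two instructions quoted above; the memory size and the value 77 placed in cell 2468 are arbitrary choices for the example.

```python
# A toy model of computer memory: a list of cells, each holding one number.
# List indices play the role of the numbered "addresses" described above.

memory = [0] * 4096   # 4096 cells, all initially zero (size chosen arbitrarily)

# "Put the number 123 into the cell numbered 1357."
memory[1357] = 123

# "Add the number that is in cell 1357 to the number that is in cell 2468
#  and put the answer into cell 1595."
memory[2468] = 77     # give cell 2468 some value first (arbitrary)
memory[1595] = memory[1357] + memory[2468]

print(memory[1595])   # 200
```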
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
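The byte arithmetic above can be verified directly. This Python snippet (an added illustration) shows the 2^8 = 256 count, a negative number stored in two's complement, and a larger number spread over several consecutive bytes.

```python
# One byte = 8 bits, so it can hold 2**8 = 256 distinct values:
# 0..255 when read as unsigned, or -128..127 in two's complement.
print(2 ** 8)  # 256

# Two's complement: for one byte, a negative number n is stored as 256 + n.
n = -5
raw = n.to_bytes(1, byteorder="big", signed=True)
print(raw.hex())                                 # fb  (251 = 256 - 5)
print(int.from_bytes(raw, "big", signed=False))  # 251 -> read as unsigned
print(int.from_bytes(raw, "big", signed=True))   # -5  -> read as two's complement

# Larger numbers use several consecutive bytes (here, four of them).
big = (1_000_000).to_bytes(4, "big")
print(big.hex())                   # 000f4240
print(int.from_bytes(big, "big"))  # 1000000
```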
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.
In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
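As a loose software analogy for the hardware caches just described (added here for illustration; real CPU caches operate on blocks of memory transparently in hardware), the sketch below keeps the results of a slow lookup in a fast dictionary so that repeated requests skip the slow path. The function slow_fetch and its 0.01-second delay are stand-ins invented for the example.

```python
import time

cache = {}  # the small, fast store (plays the role of a CPU cache)

def slow_fetch(address):
    """Stand-in for a slow main-memory access."""
    time.sleep(0.01)      # simulated latency
    return address * 2    # dummy "stored" value

def read(address):
    """Check the fast cache first; fall back to the slow fetch on a miss."""
    if address in cache:         # cache hit: fast path
        return cache[address]
    value = slow_fetch(address)  # cache miss: slow path
    cache[address] = value       # keep it for next time
    return value

read(42)  # first access: miss, pays the slow-fetch cost
read(42)  # second access: hit, returned straight from the cache
```

Just as the text says, the data is moved into the fast store automatically, without any intervention by the caller of read().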

Friday, June 19, 2009

Science

Science (from the Latin scientia, meaning "knowledge") refers to any systematic knowledge base or prescriptive practice that is capable of resulting in a prediction or a predictable type of outcome. In this sense, science may refer to a highly skilled technique or practice.
In its more restricted contemporary sense, science refers to a system of acquiring knowledge based on scientific method, and to the organized body of knowledge gained through such research. This article focuses on the more restricted use of the word. Science as discussed in this article is sometimes called experimental science to differentiate it from applied science—the application of scientific research to specific human needs—although the two are often interconnected.
Science is a continuing effort to discover and increase human knowledge and understanding through disciplined research. Using controlled methods, scientists collect observable evidence of natural or social phenomena, record measurable data relating to the observations, and analyze this information to construct theoretical explanations of how things work. The methods of scientific research include the generation of hypotheses about how phenomena work, and experimentation that tests these hypotheses under controlled conditions. Scientists are also expected to publish their information so other scientists can do similar experiments to double-check their conclusions. The results of this process enable better understanding of past events, and a better ability to predict future events of the same kind as those that have been tested.

Basic classifications
Scientific fields are commonly classified along two major lines: natural sciences, which study natural phenomena (including biological life), and social sciences, which study human behavior and societies. These groupings are empirical sciences, which means the knowledge must be based on observable phenomena and capable of being tested for its validity by other researchers working under the same conditions. There are also related disciplines that are grouped into interdisciplinary and applied sciences, such as engineering and health science. Within these categories are specialized scientific fields that can include elements of other scientific disciplines but often possess their own terminology and body of expertise.
Mathematics, which is sometimes classified within a third group of science called formal science, has both similarities and differences with the natural and social sciences. It is similar to empirical sciences in that it involves an objective, careful and systematic study of an area of knowledge; it is different because of its method of verifying its knowledge, using a priori rather than empirical methods. Formal science, which also includes statistics and logic, is vital to the empirical sciences. Major advances in formal science have often led to major advances in the empirical sciences. The formal sciences are essential in formulating and evaluating hypotheses, theories, and laws, both in discovering and describing how things work (natural sciences) and how people think and act (social sciences).

Technology

Technology is a broad concept that deals with an animal species' usage and knowledge of tools and crafts, and how it affects an animal species' ability to control and adapt to its environment. Technology is a term with origins in the Greek "technologia", "τεχνολογία" — "techne", "τέχνη" ("craft") and "logia", "λογία" ("saying"). However, a strict definition is elusive; "technology" can refer to material objects of use to humanity, such as machines, hardware or utensils, but can also encompass broader themes, including systems, methods of organization, and techniques. The term can either be applied generally or to specific areas: examples include "construction technology", "medical technology", or "state-of-the-art technology".
The human species' use of technology began with the conversion of natural resources into simple tools. The prehistoric discovery of the ability to control fire increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Recent technological developments, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. However, not all technology has been used for peaceful purposes; the development of weapons of ever-increasing destructive power has progressed throughout history, from clubs to nuclear weapons.
Technology has affected society and its surroundings in a number of ways. In many societies, technology has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of the Earth and its environment. Various implementations of technology influence the values of a society and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.
Philosophical debates have arisen over the present and future use of technology in society, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar movements criticise the pervasiveness of technology in the modern world, claiming that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. Indeed, until recently, it was believed that the development of technology was restricted only to human beings, but recent scientific studies indicate that other primates and certain dolphin communities have developed simple tools and learned to pass their knowledge to other generations.

Science, Engineering & Technology

The distinction between science, engineering and technology is not always clear. Science is the reasoned investigation or study of phenomena, aimed at discovering enduring principles among elements of the phenomenal world by employing formal techniques such as the scientific method. Technologies are not usually exclusively products of science, because they have to satisfy requirements such as utility, usability and safety.
Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for practical human means, often (but not always) using results and techniques from science. The development of technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic, and historical knowledge, to achieve some practical result.
Technology is often a consequence of science and engineering — although technology as a human activity precedes the two fields. For example, science might study the flow of electrons in electrical conductors by using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools and machines, such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists; the three fields are often considered as one for the purposes of research and reference. The exact relations between science and technology in particular were debated by scientists, historians, and policymakers in the late 20th century, in part because the debate can inform the funding of basic and applied science. In the immediate wake of World War II, for example, it was widely considered in the United States that technology was simply "applied science" and that to fund basic science was to reap technological results in due time. An articulation of this philosophy could be found explicitly in Vannevar Bush's treatise on postwar science policy, Science—The Endless Frontier: "New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature... This essential new knowledge can be obtained only through basic scientific research." In the late 1960s, however, this view came under direct attack, leading to initiatives to fund science for specific tasks (initiatives resisted by the scientific community). The issue remains contentious—though most analysts resist the model that technology is simply a result of scientific research.