Computer Science, asked by Ishu307, 11 months ago

Define bar chart and filter in a spreadsheet. (2 marks)
Class X

Please answer this question as soon as possible.

Answers

Answered by kingitaat

The ENIAC was a huge machine with a huge power requirement and two major disadvantages. Maintenance was extremely difficult, as the tubes broke down regularly and had to be replaced, and there was also a big problem with overheating. The most important limitation, however, was that every time a new task needed to be performed the machine had to be rewired. In other words, programming was carried out with a soldering iron. In the late 1940s John von Neumann (at the time a special consultant to the ENIAC team) developed the EDVAC (Electronic Discrete Variable Automatic Computer), which pioneered the "stored program concept". This allowed programs to be read into the computer and so gave birth to the age of general-purpose computers.

The Generations of Computers

It used to be quite popular to refer to computers as belonging to one of several "generations". These generations are:

The First Generation (1943-1958): This generation is often described as starting with the delivery of the first commercial computer to a business client, which happened in 1951 with the delivery of the UNIVAC to the US Bureau of the Census. It lasted until about the end of the 1950s (although some machines stayed in operation much longer than that). The main defining feature of first-generation computers was the use of vacuum tubes as internal components. Vacuum tubes are generally about 5-10 centimetres in length, and the large numbers of them required in computers resulted in huge and extremely expensive machines that often broke down (as tubes failed).

The Second Generation (1959-1964): In the mid-1950s Bell Labs developed the transistor. Transistors were capable of performing many of the same tasks as vacuum tubes but were only a fraction of the size. The first transistor-based computer was produced in 1959. Transistors were not only smaller, enabling computer size to be reduced, but also faster, more reliable and less power-hungry. The other main improvement of this period was the development of computer languages. Assembler languages, or symbolic languages, allowed programmers to specify instructions in words (albeit very cryptic ones), which were then translated into a form the machines could understand (typically series of 0s and 1s: binary code). Higher-level languages also came into being during this period. Whereas assembler languages have a one-to-one correspondence between their symbols and actual machine functions, a single higher-level command often represents a complex sequence of machine instructions. Two higher-level languages developed during this period (Fortran and Cobol) are still in use today, though in much more developed forms.

The Third Generation (1965-1970): In 1965 the first integrated circuit (IC) was developed, in which a complete circuit of hundreds of components could be placed on a single silicon chip 2 or 3 mm square. Computers using these ICs soon replaced transistor-based machines. Again, one of the major advantages was size: computers became more powerful and at the same time much smaller and cheaper, and so accessible to a much larger audience. An added advantage of smaller size is that electrical signals have shorter distances to travel, so the speed of computers increased. Another feature of this period is that computer software became much more powerful and flexible, and for the first time more than one program could share the computer's resources at the same time (multi-tasking).

The majority of programming languages used today are often referred to as 3GLs (third-generation languages), even though some of them originated during the second generation.
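To see the difference between a higher-level statement and the lower-level instructions it stands for, here is a small Python sketch (Python standing in for a typical 3GL; this example is illustrative and not part of the original answer). The standard dis module prints the interpreter's bytecode, where each instruction is roughly analogous to one assembler mnemonic:

    import dis

    def add_tax(price, tax):
        # One high-level (3GL) statement...
        total = price + tax
        return total

    # ...expands into several lower-level instructions. Here CPython
    # bytecode stands in for machine code: each printed operation is
    # analogous to one assembler mnemonic, which in turn maps
    # one-to-one to a binary opcode.
    dis.dis(add_tax)

Running this prints a handful of load, add, store and return instructions for the single line total = price + tax, which is exactly the one-to-many relationship described above.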

The Fourth Generation (1971-present): The boundary between the third and fourth generations is not at all clear-cut. Most of the developments since the mid-1960s can be seen as part of a continuum of gradual miniaturisation. In 1970 large-scale integration was achieved, where the equivalent of thousands of integrated circuits were crammed onto a single silicon chip. This again increased computer performance (especially reliability and speed) while reducing computer size and cost. Around this time the first complete general-purpose microprocessor became available on a single chip. In 1975 Very Large Scale Integration (VLSI) took the process one step further: complete computer central processors could now be built onto one chip, and the microcomputer was born. Such chips are far more powerful than ENIAC, yet are only about 1 cm square, whereas ENIAC filled a large building. During this period Fourth Generation Languages (4GLs) have come into existence. Such languages are a step further removed from the computer hardware in that they read much like natural language. Many database languages can be described as 4GLs, and they are generally much easier to learn than 3GLs.
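As a rough illustration of the 4GL idea (again a sketch, not from the original answer): a database language such as SQL states what result is wanted, while a 3GL spells out how to compute it step by step. The table and column names below are hypothetical:

    # 3GL style (Python): describe *how* to find the answer.
    rows = [("Ada", 91), ("Alan", 78), ("Grace", 85)]
    toppers = []
    for name, marks in rows:
        if marks >= 80:
            toppers.append(name)
    print(toppers)  # ['Ada', 'Grace']

    # 4GL style (SQL, shown as a comment) states *what* is wanted,
    # over a hypothetical "students" table:
    #     SELECT name FROM students WHERE marks >= 80;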
