Computer Science, asked by ginacagas, 5 months ago

It refers to the process by which a computer displays images.

Answers

Answered by parthu2011
8

Answer:

Let’s go back a few years and talk about television. Early computers used television monitors rather than a totally new way of displaying pictures, because TVs were plentiful and cheap.

The human eye sees using light-sensitive cells in the retina. There are three types of colour-sensitive cells (cone cells): red, green and blue. Our brain interprets different proportions of their signals as different colours, e.g. red and green together as yellow. To have a computer display yellow, we don’t need it to actually emit yellow light; we only need it to show some red and some green. That makes life simpler.

For a long time, technology limited how much resolution we could show; now we’re approaching the limit of what the human eye can resolve, because the retina only has so many cells.

TV required a picture to be split into pieces that could be sent over radio and reassembled; what was chosen was a series of horizontal lines (as opposed to, say, vertical lines, spirals or random squiggles). The camera sensor scans across a horizontal line and generates brightness and colour signals from left to right, then skips back and down and does the same for a new line, until after about 500 lines (or 630, or 1024) it has scanned a complete image; then it goes back to the top and starts again. On a colour TV, the screen is split into thousands of triplets of coloured phosphor dots (red, green and blue), so each line is composed of a few hundred dots, and the complete picture is made of a few hundred lines.
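The additive mixing described above can be sketched in a few lines of code. This is a minimal illustration, not any real graphics API: a colour is just a triple of 8-bit channel intensities, and "yellow" is simply full red plus full green.

```python
# A sketch of additive colour mixing with 8-bit channels (0-255).
# The eye/brain blends the three channel intensities into one colour.

def mix(red, green, blue):
    """Clamp each channel to the 0-255 range and return an RGB triple."""
    clamp = lambda v: max(0, min(255, v))
    return (clamp(red), clamp(green), clamp(blue))

yellow = mix(255, 255, 0)    # full red + full green, no blue
white  = mix(255, 255, 255)  # all three channels saturated
print(yellow)  # (255, 255, 0)
```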

For a computer to generate an image to feed to a TV, it has to generate the colour and brightness signals in the right place. Computer memory is a continuous line of ever-increasing addresses, so it is mapped onto the lines of the display: locations 0–511 are the first line, 512–1023 the next line, and so on. Each location holds a 24-bit value, 3 bytes, each byte representing an amount of red, green or blue. All 0s means black; all 1s means saturated white. A graphics chip reads out the consecutive memory locations in the correct order and runs the bytes through a digital-to-analog converter, e.g. 0 becomes 0 V, 128 becomes 1 V, 255 becomes 2 V: a voltage corresponding to brightness. In an old-style tube TV, those voltages controlled the intensity of an electron beam, one beam per colour, which wrote onto an array of phosphor dots, causing them to emit visible light. In a modern LCD or LED monitor, the circuits emulate an old TV, because that is easier than reinventing everything from scratch.
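The memory-to-scanline mapping above can be sketched as code. This assumes a hypothetical 512-pixel-wide display with 3 bytes per pixel, matching the 0–511 / 512–1023 example in the answer; it is an illustration of the addressing arithmetic, not a real driver.

```python
# A sketch of a linear framebuffer: consecutive memory addresses form
# horizontal scanlines, 3 bytes (red, green, blue) per pixel.
# The resolution is a made-up example matching the text above.

WIDTH, HEIGHT = 512, 384          # hypothetical resolution
BYTES_PER_PIXEL = 3               # one byte each for red, green, blue

framebuffer = bytearray(WIDTH * HEIGHT * BYTES_PER_PIXEL)

def pixel_offset(x, y):
    """Byte offset of pixel (x, y): row y starts at y * WIDTH pixels."""
    return (y * WIDTH + x) * BYTES_PER_PIXEL

def set_pixel(x, y, r, g, b):
    """Write one RGB triple into the framebuffer."""
    off = pixel_offset(x, y)
    framebuffer[off:off + 3] = bytes((r, g, b))

set_pixel(0, 0, 0, 0, 0)          # top-left pixel: black
set_pixel(10, 2, 255, 255, 255)   # a saturated white pixel on line 2
print(pixel_offset(10, 2))        # (2 * 512 + 10) * 3 = 3102
```

A real graphics chip does the same arithmetic in hardware as it reads memory out in scan order.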

So you have an area of computer memory that is mapped onto the monitor. If you create a program that writes hex 00 00 FF to a particular byte triplet, that will make a particular pixel on the screen turn blue. So if you want to make a letter on the screen, you just need to know the pattern of dots or vector strokes that makes A, B, C etc., and then you use a library routine to draw it over and over. When I built my first PC back in 1978 or so, I drew out letters on squared paper to design a font and programmed the bits into memory one by one, so that the computer would display text on a TV (by way of a UHF modulator, since real computer monitors were much too expensive).
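The hand-drawn bitmap-font idea can be sketched like this. The 5×7 pattern for 'A' here is illustrative, not taken from any real font; each row of the glyph is one byte of bits, and "drawing" the letter means turning those bits into pixels (shown here as text instead of a framebuffer).

```python
# A sketch of a bitmap font glyph: each letter is a small grid of bits,
# exactly like the squared-paper designs described above.
# This 5x7 'A' is a made-up example pattern.

FONT_A = [
    0b01110,
    0b10001,
    0b10001,
    0b11111,
    0b10001,
    0b10001,
    0b10001,
]

def render(glyph, width=5):
    """Turn bit rows into text: '#' for a lit pixel, '.' for a dark one."""
    lines = []
    for row in glyph:
        lines.append("".join("#" if row & (1 << (width - 1 - bit)) else "."
                             for bit in range(width)))
    return "\n".join(lines)

print(render(FONT_A))
# .###.
# #...#
# #...#
# #####
# #...#
# #...#
# #...#
```

A character-drawing routine in a real machine would copy each lit bit into the framebuffer at the right offset instead of printing characters.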


Answered by Johnka
3

Answer: I can't find the answer.

Explanation:
