What do you understand by display processor?
Also, interestingly, while “Display Processors” were common solutions for driving displays in the era when computing was dominated by mainframes and mini-computers, the first personal computers (PCs) did not include similar display-oriented chips or “Display Processors”.
That’s because it was still inherently expensive to build a CPU (or processor), even one dedicated to a display. Instead, PCs used simpler chips that acted more as “video controllers”, which had two main modes: text and graphics. [ Some very basic PCs did not even have graphics modes, but because graphics was such an attraction, even the most basic PCs had at least an option for rudimentary graphics, even if the resolution was very poor, such as around 200x200 pixels. ]
In text mode, the controller took ASCII text and displayed it on a fixed grid (say 40x25 or 80x25 characters) on the screen. In graphics mode, it refreshed the display from the frame buffer (an area of memory accessible by both the CPU and the video controller, set aside for the screen), converting the digital data into video signals. It did not offer the more complex graphics operations that “Display Processors” handled, such as drawing lines or filled shapes. It kept things simple.
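To make the two modes concrete, here is a minimal C sketch of the two kinds of buffers; the sizes and layout are illustrative assumptions, not those of any specific adapter:

#include <stdint.h>

/* Toy model of the two buffers a simple video controller scans out.
   Sizes and layout are illustrative assumptions, not a specific adapter's. */
#define TEXT_COLS 80
#define TEXT_ROWS 25
#define GFX_W     320
#define GFX_H     200

/* Text mode: one character code plus one attribute (colour) byte per cell.
   The controller turns each code into a glyph using its built-in font. */
static uint8_t text_buf[TEXT_ROWS][TEXT_COLS][2];

/* Graphics mode: one byte per pixel; the controller simply scans these out. */
static uint8_t frame_buf[GFX_H][GFX_W];

static void put_char_cell(int row, int col, char c, uint8_t attr) {
    text_buf[row][col][0] = (uint8_t)c;  /* the CPU writes only the code...   */
    text_buf[row][col][1] = attr;        /* ...the controller draws the glyph */
}

static void put_pixel(int x, int y, uint8_t colour) {
    frame_buf[y][x] = colour;            /* the CPU is responsible for every pixel */
}

On a real adapter, both buffers would sit at fixed addresses in video memory that the CPU and the video controller could both reach.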
The CPU was solely in charge of writing to either the text buffer or the bitmap frame buffer, without any other assistance; the “video controller” simply took care of keeping the screen refreshed. The only “intelligence” it had was a built-in “character set” holding the shape of each character (A, B, C, and so on), so the CPU only had to supply the character code and let the video controller draw the glyph from its set. For graphics, though, the CPU handled every bit; there were no features to simply draw a line from one coordinate to another, for example.
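For example, drawing even a straight line meant the CPU computing and writing every pixel itself. A minimal sketch of such software line drawing (Bresenham’s algorithm, over a toy frame buffer with illustrative sizes):

#include <stdint.h>
#include <stdlib.h>   /* abs() */

/* Minimal frame buffer for the sketch (sizes are illustrative). */
#define GFX_W 320
#define GFX_H 200
static uint8_t frame_buf[GFX_H][GFX_W];

static void put_pixel(int x, int y, uint8_t colour) {
    if (x >= 0 && x < GFX_W && y >= 0 && y < GFX_H)
        frame_buf[y][x] = colour;
}

/* Software line drawing (Bresenham): the CPU works out every pixel on the
   line and writes it into the frame buffer itself -- no hardware help. */
static void draw_line(int x0, int y0, int x1, int y1, uint8_t colour) {
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    for (;;) {
        put_pixel(x0, y0, colour);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}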
This architecture was used on most, if not all, of the first round of 8-bit and 16-bit PCs, such as the Apple II and the IBM PC with its CGA, EGA, and even VGA graphics adapters (VGA actually stood for Video Graphics Array). A few of these systems, though, supported rudimentary graphics acceleration in the form of “sprites”: small bitmapped areas that could be manipulated as a block, i.e. the CPU could command the graphics controller to draw the block at a given coordinate, then rapidly erase and redraw it to animate or move it. The most common use case for such “sprites” was gaming, i.e. moving game characters (think Pac-Man) around the screen rapidly without forcing the CPU to draw every bit.
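Without sprite hardware, the CPU had to do that erase-and-redraw cycle itself, pixel by pixel; with sprites, it only had to hand the controller new coordinates. A rough sketch of the software version, with illustrative sizes and hypothetical helper names:

#include <stdint.h>

/* Minimal frame buffer and sprite for the sketch (all sizes illustrative). */
#define GFX_W 320
#define GFX_H 200
#define SPR_W 16
#define SPR_H 16
static uint8_t frame_buf[GFX_H][GFX_W];

static void put_pixel(int x, int y, uint8_t colour) {
    if (x >= 0 && x < GFX_W && y >= 0 && y < GFX_H)
        frame_buf[y][x] = colour;
}

/* Copy the sprite's pixels into the frame buffer at (x, y). */
static void draw_sprite(const uint8_t spr[SPR_H][SPR_W], int x, int y) {
    for (int r = 0; r < SPR_H; r++)
        for (int c = 0; c < SPR_W; c++)
            put_pixel(x + c, y + r, spr[r][c]);
}

/* Overwrite the sprite's old position with the background colour. */
static void erase_sprite(int x, int y, uint8_t background) {
    for (int r = 0; r < SPR_H; r++)
        for (int c = 0; c < SPR_W; c++)
            put_pixel(x + c, y + r, background);
}

/* One animation step without sprite hardware: the CPU erases the old
   position and redraws at the new one, pixel by pixel. Sprite-capable
   controllers did this block move themselves after a single command. */
static void move_sprite(const uint8_t spr[SPR_H][SPR_W],
                        int old_x, int old_y, int new_x, int new_y,
                        uint8_t background) {
    erase_sprite(old_x, old_y, background);
    draw_sprite(spr, new_x, new_y);
}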
This architecture actually progressed very far, to the stage where the CPU, running software, became capable of rendering or generating every bit if needed. In fact, you can still run modern operating systems such as Windows, Linux, or macOS in a “non-accelerated” mode, where the software renders every bit and only writes pixels to a frame buffer, which is then refreshed on the screen at 60 Hz.
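As an illustration, on Linux the legacy fbdev interface still exposes exactly this kind of plain frame buffer; the sketch below assumes a system that provides /dev/fb0 and permission to open it, and it ignores pixel-format details for simplicity:

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* The legacy Linux frame buffer device: a plain, dumb frame buffer. */
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vinfo;   /* resolution, bits per pixel */
    struct fb_fix_screeninfo finfo;   /* bytes per scanline */
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        perror("ioctl");
        return 1;
    }

    size_t size = (size_t)finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* The CPU writes every byte itself; the hardware only scans it out.
       (A real program would respect vinfo.bits_per_pixel and the colour
       layout instead of blindly filling bytes.) */
    for (size_t i = 0; i < size; i++)
        fb[i] = 0x80;

    munmap(fb, size);
    close(fd);
    return 0;
}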
As in, instead of an “in-between” sort of Display Processor that only handles rudimentary operations, we now have either “all” or “nothing”: “all” being a highly sophisticated GPU (again, like a CPU, but specially designed for graphical operations), and “nothing” being just a plain frame buffer for software running on the main CPU to write to. It turns out the main CPU is so fast that it can reasonably support a graphical display (obviously not for 3D-rendered games, but reasonably for graphical user interfaces) without the need for a rudimentary display processor.
And of course, today no one realistically uses a mainframe or mini-computer just to run a graphical user interface; people interact with them, if needed, via PCs that already have GPUs. That, I think, is why “Display Processors” as a concept went away after the PC generation of computing.