Computer Science, asked by genicsd, 22 days ago

Question Title: Von Neumann Bottleneck:

Suppose a legacy system built on the Von Neumann architecture has been used for sales forecasting for the past twenty years. Due to the growing amount of data and increasing processing requirements, the Von Neumann bottleneck has reduced the system's overall performance. Among the many possible remedies, you need to select either "in-memory processing" or placing a "cache" between the processor and main memory to overcome the Von Neumann bottleneck. The factors to be considered are cost, floating-point operations, and scalability.

Your selection must be supported by logical arguments.

Answers

Answered by SadArmyGirl

Answer:

In a Harvard architecture, unlike the Von Neumann architecture, instruction memory and data memory are separate, so the two memories need not share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions for pre-programmed tasks are stored in read-only memory, while data memory generally requires read-write memory. In some systems there is much more instruction memory than data memory, so instruction addresses are wider than data addresses.
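To make that contrast concrete, here is a minimal, purely illustrative sketch in Python of a toy Harvard-style machine (not any real instruction set): instruction memory is read-only and 16 bits wide, data memory is read-write and 8 bits wide, and each has its own address space.

```python
# Hypothetical toy machine for illustration only: instruction memory and data
# memory are completely separate, with different word widths and sizes.

class HarvardMachine:
    def __init__(self, program):
        # Instruction memory: read-only, 16-bit words, indexed only by the PC.
        self.imem = tuple(program)          # immutable -> behaves like ROM
        # Data memory: read-write, 8-bit cells, its own (smaller) address space.
        self.dmem = bytearray(256)
        self.pc = 0

    def step(self):
        word = self.imem[self.pc]                       # fetch on the instruction bus
        op = (word >> 12) & 0xF
        addr = (word >> 4) & 0xFF
        val = word & 0xF
        if op == 0x1:                                   # STORE immediate nibble
            self.dmem[addr] = val                       # write on the separate data bus
        elif op == 0x2:                                 # ADD immediate to a data cell
            self.dmem[addr] = (self.dmem[addr] + val) & 0xFF
        self.pc += 1

machine = HarvardMachine([0x1053, 0x2052])  # store 3 at address 5, then add 2
machine.step(); machine.step()
print(machine.dmem[5])                      # prints 5
```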


Answered by alishbaamir21

Answer:

For this scenario, in-memory processing is the stronger choice. In-Memory Computing has evolved because traditional solutions, typically based on disk storage and relational databases queried with SQL, are inadequate for today's business intelligence (BI) needs – namely very fast computation and the ability to scale data processing in real time.

Explanation:

In-Memory Computing is based on two main principles: the way data is stored and scalability – the ability of a system, network or process to handle constantly growing amounts of data, or its potential to be elastically enlarged to accommodate that growth. This is achieved by leveraging two key technologies: random-access memory (RAM) and parallelization.
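As a rough sketch of the storage side, the snippet below (Python standard library only; the table, column, and file names are invented for the example) runs the same aggregate query against an on-disk SQLite database and an in-memory one. On a warm operating-system cache the measured gap may be small, but it shows where the data lives in each case.

```python
# Illustrative only: same schema and query, one database on disk, one in RAM.
import random
import sqlite3
import time

def build(conn):
    conn.execute("DROP TABLE IF EXISTS sales")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    rows = [(random.choice("NESW"), random.random() * 100) for _ in range(200_000)]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    conn.commit()

disk = sqlite3.connect("sales_demo.db")   # hypothetical file name
mem = sqlite3.connect(":memory:")         # data held entirely in RAM
for conn in (disk, mem):
    build(conn)

for name, conn in (("on-disk", disk), ("in-memory", mem)):
    start = time.perf_counter()
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
    print(name, f"{time.perf_counter() - start:.4f} s")
```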

High Speed and Scalability: To achieve high speed and performance, In-Memory Computing is based on RAM data storage and indexing. Because requests are served from RAM rather than from much slower disk storage, data processing and querying can be orders of magnitude faster than disk-based solutions.
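The indexing point can be sketched in plain Python: once the records are resident in RAM, a dictionary keyed by an order id (a made-up field for this illustration) turns a full scan into a constant-time lookup.

```python
# Illustrative sketch: in-memory index vs. linear scan over the same records.
import random
import time

records = [{"order_id": i, "amount": random.random() * 100} for i in range(1_000_000)]

# Build the in-memory index once.
index = {rec["order_id"]: rec for rec in records}

target = 987_654

start = time.perf_counter()
scan_hit = next(r for r in records if r["order_id"] == target)   # O(n) scan
scan_time = time.perf_counter() - start

start = time.perf_counter()
index_hit = index[target]                                         # O(1) lookup
index_time = time.perf_counter() - start

print(scan_hit == index_hit, f"scan {scan_time:.4f} s vs index {index_time:.6f} s")
```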

For scalability – which is essential for big data processing – In-Memory Computing relies on parallelized, distributed processing. In contrast to a single, centralized server that provides processing capability to all connected systems, distributed processing spreads the data and the computation across multiple machines, each of which works on its own partition in parallel.
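A minimal sketch of that parallelization idea, using one machine's process pool as a stand-in for a cluster of servers (the data, partition count, and region codes are invented): each worker aggregates its own partition of the sales data, and the partial results are merged at the end, map-reduce style.

```python
# Illustrative only: partition the data, aggregate per worker, merge the results.
from collections import Counter
from multiprocessing import Pool
import random

def partial_totals(partition):
    # Each worker sums sales per region for its own partition only.
    totals = Counter()
    for region, amount in partition:
        totals[region] += amount
    return totals

if __name__ == "__main__":
    data = [(random.choice("NESW"), random.random() * 100) for _ in range(400_000)]
    partitions = [data[i::4] for i in range(4)]        # split across 4 workers

    with Pool(processes=4) as pool:
        partials = pool.map(partial_totals, partitions)

    merged = sum(partials, Counter())                   # reduce step
    print({region: round(total, 2) for region, total in merged.items()})
```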
