Turing machine model of parallel computation
Parallel computing has become a bedrock of the HPC field, where applications are becoming increasingly complex and compute-intensive technologies such as data analytics, deep learning and artificial intelligence (AI) are rapidly emerging. Nvidia and AMD have driven the adoption of GPU accelerators in supercomputers and other high-end systems, Intel is addressing the space with its many-core Xeon Phi processors and coprocessors, and, as we’ve talked about at The Next Platform, other acceleration technologies like field-programmable gate arrays (FPGAs) are pushing their way into the picture. Parallel computing is a booming field.
However, the future was not always so assured. The field of parallel computing saw strong growth from the 1960s into the early 1990s, a multi-decade period that coincided with the rise of research into AI technologies. That first era of significant progress in parallel computing waned in the 1990s, enough that Ken Kennedy, a proponent of the technology, gave a speech in 1994 questioning whether parallel computing was coming to an end. At the same time, sequential computing – based on the von Neumann architecture – continued the tremendous run of success it had enjoyed since the EDVAC, the first machine designed around the model, was proposed in the mid-1940s.
As noted above, parallel computing has seen a strong rebirth since the mid-2000s. That said, a group of researchers from Brown University, the University of Delaware and Tsinghua University in Beijing believe that the challenges that tripped up parallel computing in the mid-1990s shouldn’t be ignored even as the field has rapidly expanded over the past decade. In a recent research paper, the computer engineers argued that the lessons of the past should be studied to ensure that research into parallel computing doesn’t falter again, and they looked at what has made sequential computing the roaring success it has been.
“We should remember the low points of the field more than 20 years ago and review the lesson that has led to the question at that point whether ‘parallel computing will soon be relegated to the trash heap reserved for promising technologies that never quite make it,’” the researchers wrote in their study, noting Kennedy’s concerns. “Facing the new era of parallel computing, we should learn from the robust history of sequential computation in the past 60 years.”
An interesting aside – at the same time that parallel computing was hitting a wall in the 1990s, doubts were also surfacing about the viability of AI. Public funders such as DARPA and the National Science Foundation, as well as private backers, questioned the possibility of creating true AI. “The correlate fate of parallel computing and AI reveals an important fact —the tremendous computation demand of AI has inspired the advance of parallel computer architectures,” the researchers wrote. “A well known example is the impressive parallel machine products pursued by Thinking Machine Corporation in early 90s— its glorious (but short) history and its failure.”
The success of sequential computing was primarily built on the combination of the Turing machine model and the von Neumann architecture model, which, the authors wrote, “specifies an abstract machine architecture to efficiently support the Turing machine model.” Based on this, the authors in their study outlined a proposed parallel Turing machine model that brings one of the key pillars of sequential computing systems – a simple, solid, easily understandable and broadly accepted sequential programming model – to parallel systems.
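To make the sequential baseline concrete, here is a minimal, illustrative sketch of the classical single-tape Turing machine model – the deterministic model that the von Neumann architecture is said to support efficiently. The state names and the example transition table are hypothetical choices for this sketch, not anything drawn from the paper.

```python
# A minimal sketch of the classical (sequential) Turing machine model:
# a finite control, a single tape, and a fully deterministic step function.
# State names and the transition table below are illustrative only.

def run_turing_machine(transitions, tape, state="q0", accept="q_accept"):
    """Run a single-tape Turing machine until it reaches the accept state.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), +1 (right) or 0 (stay).
    """
    cells = dict(enumerate(tape))      # sparse tape; blanks default to "_"
    head = 0
    while state != accept:
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit of the input, accept on the first blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("q_accept", "_", 0),
}
print(run_turing_machine(flip, "1011"))   # prints 0100_
```

The point of the sketch is the property the authors want to carry over: one control state, one head, one step at a time, so the outcome is entirely predictable from the program and its input.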
“Lacking a solid yet intuitive parallel Turing machine model will continue to be a serious challenge in the future parallel computing,” they wrote in their study, titled “Parallel Turing Machine, a Model.” They noted that “our proposed parallel Turing machine (PTM) model can preserve, whenever possible, those attractive properties of sequential computation — understandability, predictability and determinism — under parallel computation.”
With their own parallel Turing machine (PTM) model, the researchers hope to preserve the key properties of sequential computing – understandability, predictability and determinism – in a parallel setting. The properties necessary for parallel programs need to be captured in a simple program execution model (PXM) that is not based on threads. The PXM needs to fit the bill as an efficient abstract architecture model and should provide both synchronous and asynchronous parallel program execution, supporting open environments that handle events from inside the machine as well as those created outside of the system. In addition, it needs a robust interface to support high-level programming models and language processing. To hit all these goals, the authors propose a “program execution model based on an asynchronous execution model — codelet model, which [is] rooted in the dataflow model. We employ the event-driven codelet model and the memory model.”
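To give a flavor of what an event-driven, codelet-style execution model looks like, here is a minimal single-threaded sketch in the dataflow spirit the authors describe: each codelet is a non-preemptive unit of work that fires only once all of its input events have arrived. The class names, the dependency-count field and the scheduler loop are assumptions made for this sketch, not the API from the paper.

```python
# A minimal sketch of an event-driven, codelet-style (dataflow) scheduler.
# Names and fields are illustrative; a real runtime would run ready
# codelets in parallel across workers rather than in one loop.
from collections import deque

class Codelet:
    def __init__(self, name, fn, deps):
        self.name = name          # label for tracing
        self.fn = fn              # non-preemptive body: runs to completion
        self.deps = deps          # events still needed before it can fire
        self.successors = []      # codelets signaled when this one finishes

def signal(codelet, ready):
    """Deliver one event; enqueue the codelet once all its inputs arrived."""
    codelet.deps -= 1
    if codelet.deps == 0:
        ready.append(codelet)

def run(entry_codelets):
    """Fire codelets as their dependences are satisfied (dataflow order)."""
    ready = deque(entry_codelets)
    while ready:
        c = ready.popleft()
        c.fn()
        for succ in c.successors:
            signal(succ, ready)

# Example: a and b can run in either order; c fires only after both finish.
a = Codelet("a", lambda: print("a done"), deps=0)
b = Codelet("b", lambda: print("b done"), deps=0)
c = Codelet("c", lambda: print("c done"), deps=2)
a.successors.append(c)
b.successors.append(c)
run([a, b])
```

Because a codelet is scheduled purely by the arrival of its input events rather than by thread interleaving, the final result does not depend on which ready codelet runs first, which is how a dataflow-rooted model keeps the determinism the authors are after.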