What can I write for the conclusion of a Java project?
saurabh000345:
Please elaborate on what you want to know... be much more precise.
Answers
Answered by
Conclusion
We have explored the practicality of doing parallel computing in Java, and of providing Java interfaces to High Performance Computing software. Java sits on a virtual machine model significantly different from the hardware-oriented model that C or Fortran exploit directly. Java discourages or prevents direct access to some of the fundamental resources of the underlying hardware.
Our earliest experiments in this direction involved working entirely within Java, building new software on top of the communication facilities of the standard API. The work in Chapters 3 and 4 involved creating a Java interface to an existing HPC package. In the long term, Java may become a major implementation language for large software packages like MPI. It certainly has advantages in respect of portability that could simplify implementations dramatically. In the immediate term, recoding these packages does not appear so attractive; Java wrappers to existing software look more sensible. On a cautionary note, our experience with MPI suggests that interfacing Java to non-trivial communication packages may be less easy than it sounds. Nevertheless, we intend to create a Java interface to an existing run-time library for data parallel computation.
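To make the wrapper approach concrete, the sketch below shows the general shape of such a binding: a Java class whose native methods are forwarded through JNI to an existing C communication library. The class, method, and library names here are illustrative assumptions, not the interface developed in this work.

    // Illustrative sketch only: a hypothetical JNI wrapper class showing the
    // general shape of the "Java wrapper to existing software" approach.
    // Class, method, and library names are assumptions for illustration.
    public class NativeCommWrapper {

        static {
            // Load the C "glue" library that forwards each call to the
            // underlying native communication package.
            System.loadLibrary("commglue");
        }

        // One native declaration per entry point of the underlying library;
        // the bodies live in C, compiled against the JNI-generated headers.
        public native void init(String[] args);

        public native int rank();

        public native void send(byte[] buffer, int dest, int tag);

        public native void recv(byte[] buffer, int source, int tag);

        public native void shutdown();
    }

The hard part, as noted above, is the C glue itself: marshalling Java arrays into the buffers the native package expects, and keeping the native library's signal and interrupt handling from clashing with the Java runtime.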
So is Java, as it stands, a good language for High Performance Computing?
It still has to be demonstrated that Java can be compiled to code of efficiency comparable with C or Fortran. Many avenues are being followed simultaneously towards a higher performance Java. For example, IBM is developing an optimizing Java compiler that produces binary code directly.
Our final interface to MPI is quite elegant, and provides much of the functionality of the standard C and Fortran bindings. But creating this interface was a more difficult process than one might hope, both in terms of getting a good specification and in terms of making the implementation work. We noted that the lack of features like C++ templates (or any form of parametric polymorphism) and user-defined operator overloading (available in many modern languages) made it difficult to produce a completely satisfying interface to a data parallel library. The Java language as currently defined imposes various limits on the creativity of the programmer.
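As a concrete illustration of the genericity problem mentioned above (our example, not one from the dissertation): without parametric polymorphism, a container class in pre-generics Java must either be duplicated for every element type or fall back to Object and give up compile-time type safety.

    // Illustrative sketch of the genericity problem: a per-type container.
    // A DistributedIntArray, DistributedDoubleArray, ... would each have to
    // be written out by hand, whereas a C++ template (or Java generics,
    // which did not yet exist) could generate them from a single definition.
    public class DistributedFloatArray {

        private final float[] localBlock;   // this process's block of the global array

        public DistributedFloatArray(int localSize) {
            localBlock = new float[localSize];
        }

        public float get(int localIndex) {
            return localBlock[localIndex];
        }

        public void set(int localIndex, float value) {
            localBlock[localIndex] = value;
        }
    }

The alternative, declaring the element type as Object, forces boxing of primitive elements and pushes type errors to run time, which is exactly the compromise the text alludes to.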
In many respects Java is undoubtedly a better language than Fortran. It is object-oriented and highly dynamic, and there is every reason to suppose that such features will be as valuable in scientific computing as in any other programming discipline. But to displace established scientific programming languages Java will probably have to acquire some of the facilities taken for granted in those languages.
In this dissertation, we have discussed motivations for introducing HPJava, an HPspmd programming model. HPJava language extensions provide much of the expressive power of HPF, but in a strictly SPMD environment with a good communication library. It allows programs to combine data parallel code and SPMD library calls directly. Because of the relatively low-level programming model, interfacing to other parallel-programming paradigms is more natural than in HPF. With suitable care, it is possible to make direct calls to, say, MPI from within the data parallel program. The object-oriented features of Java are also exploited to give an elegant parameterization of the distributed arrays of the extended language.
We have discussed the design and development of mpiJava, a pure Java interface to MPI. mpiJava provides a fully functional and efficient Java interface to MPI. When used for distributed computing, the current implementation of mpiJava does not impose a huge overhead on top of the native MPI interface. Interfacing Java to MPI is not always trivial: in earlier implementations we often saw low-level conflicts between the Java runtime and the interrupt mechanisms used in the MPI implementations. The new native thread feature in JDK 1.2 has eliminated the interrupt problem that we encountered with earlier releases of the JDK. mpiJava is now stable on UNIX and Linux platforms using MPICH and JDK 1.2. The syntax of mpiJava is easy to understand and use, making it relatively simple for programmers with either a Java or a scientific background to take up. We believe that mpiJava will also provide a popular means for teaching students the fundamentals of parallel programming with MPI.
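For readers unfamiliar with mpiJava, the fragment below sketches the flavour of the binding: a single integer sent from rank 0 to rank 1. The calls shown (MPI.Init, Comm.Rank, Comm.Send, Comm.Recv, MPI.Finalize) follow the published mpiJava class signatures, but the program itself is our own illustration and should be checked against the installed release.

    import mpi.*;

    // Minimal mpiJava-style example: process 0 sends one integer to process 1.
    // Run with at least two processes, e.g. via MPICH's mpirun.
    public class PingExample {

        public static void main(String[] args) throws Exception {
            MPI.Init(args);

            int rank = MPI.COMM_WORLD.Rank();
            int[] message = new int[1];

            if (rank == 0) {
                message[0] = 42;
                // Send(buffer, offset, count, datatype, destination, tag)
                MPI.COMM_WORLD.Send(message, 0, 1, MPI.INT, 1, 99);
            } else if (rank == 1) {
                // Recv(buffer, offset, count, datatype, source, tag)
                MPI.COMM_WORLD.Recv(message, 0, 1, MPI.INT, 0, 99);
                System.out.println("Rank 1 received " + message[0]);
            }

            MPI.Finalize();
        }
    }

The binding mirrors the C++ MPI binding closely, which is why programmers coming from either Java or traditional scientific computing tend to find it easy to pick up.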