
If we replace the C++ compiler with an interpreter, how will it affect the C++ language in terms of processing speed, efficiency, or other factors? Support your answer with solid points.

Answers

Answered by saranr

Explanation:

If we talk about efficiency in terms of the time taken to execute even the simplest instruction, then processing speed is just one of the factors determined by efficiency, so there's no need to consider anything other than the latter.
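
Just to make that concrete, here's a minimal sketch of how one might measure "time taken to execute" in C++ with std::chrono (the workload is invented purely for illustration):

    #include <chrono>
    #include <cstdint>
    #include <iostream>

    int main() {
        // volatile keeps the compiler from optimizing the loop away
        volatile std::uint64_t sum = 0;
        const auto start = std::chrono::steady_clock::now();
        for (std::uint64_t i = 0; i < 100'000'000; ++i)
            sum = sum + i;  // the "simple instruction" being measured
        const auto stop = std::chrono::steady_clock::now();
        std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
                  << " ms\n";
    }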

Now we've simplified the question to:

In your opinion, if we interpret C++ code instead of compiling it to machine code, how will runtime efficiency be affected?

At this point, we could simply state that interpreted code is going to be less efficient than machine code; that's almost always true, but not quite always.

One step at a time:

Code always has to be parsed, no matter the language; the difference lies mostly in the level of abstraction. Machine code is the lowest level possible, so it will run faster than a language like C++ even when interpreted. Consider the classic RISC pipeline: "parsing" machine code basically consists of fetching and decoding instructions, and the same concept applies to pretty much any existing processor. These steps happen at the hardware level and are extremely fast. Hypothetically, interpreting machine code in software would still be quite fast (most likely only about two times slower), because machine code is so low-level that parsing it in software remains simple and quick. Parsing a higher-level language like C++ is a totally different matter: not only does it happen at the software level, but the language has a syntax and semantics to be handled, so the parser is going to be very complex, and merely parsing the code to decide what to do is dozens of times slower, in the best case.
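
To make the fetch/decode idea concrete, here's a minimal sketch of a software fetch-decode-execute loop over a toy instruction set (the opcodes are invented for illustration); every iteration pays in software for the fetch and the dispatch that hardware does for free:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Toy instruction set, invented for illustration.
    enum Op : std::uint8_t { PUSH, ADD, PRINT, HALT };

    void run(const std::vector<std::uint8_t>& code) {
        std::vector<std::int64_t> stack;
        std::size_t pc = 0;                      // program counter
        for (;;) {
            const std::uint8_t op = code[pc++];  // fetch
            switch (op) {                        // decode + dispatch
            case PUSH:
                stack.push_back(code[pc++]);     // execute: push the immediate operand
                break;
            case ADD: {
                const std::int64_t b = stack.back();
                stack.pop_back();
                stack.back() += b;
                break;
            }
            case PRINT:
                std::cout << stack.back() << '\n';
                break;
            case HALT:
                return;
            }
        }
    }

    int main() {
        run({PUSH, 2, PUSH, 3, ADD, PRINT, HALT});  // equivalent to: print(2 + 3)
    }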

Interpreted code is not always less efficient, though. In reality, more factors come into play than in a hypothetical scenario: perfection is reachable in theory, but not in practice. Compilers are far from flawless, and it's particularly difficult to write one that turns very high-level code directly into machine code; that's why it's unlikely that languages like JavaScript will ever be natively compiled. That's also why LLVM exists: it lowers source code gradually, letting a front end parse it into an AST, translating that into an intermediate representation of sequential instructions (LLVM IR), and then optimizing the IR step by step before machine code is emitted. Even so, the optimization won't be perfect, and there is still so much room for improvement when it comes to compilers… Interpreted code can end up being faster than badly compiled code, provided the interpreter in use is really well done. For instance, the experimental LLVM-based Kotlin/Native compiler is hardly optimized at all yet; not exactly production-ready, I'd say, hence the term experimental :) It so happened that the same Kotlin code I once compiled to a native binary ran slower than when interpreted by the JVM, and you know, the JVM is surely not the best when it comes to performance.
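
As a hedged illustration of that lowering, here's a trivial function together with, in comments, roughly the LLVM IR a front end like Clang emits for it (exact names and attributes vary by version, so treat the IR as a sketch rather than literal compiler output):

    // extern "C" keeps the symbol unmangled so the IR below is easier to read.
    extern "C" int add(int a, int b) {
        return a + b;
    }

    // Roughly the LLVM IR produced for the function above (simplified):
    //
    //   define i32 @add(i32 %a, i32 %b) {
    //     %sum = add nsw i32 %a, %b
    //     ret i32 %sum
    //   }
    //
    // The optimizer and the back end work on this representation, not on the
    // original C++ source, before machine code is finally emitted.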

That being said, most available C++ compilers are quite well done, and it would be a real challenge to come up with an interpreter capable of running C++ code faster than they do, if not outright impossible.

Compiled code is almost always going to be more efficient. Interpreters add one more layer of complexity, which is definitely not a good thing for performance.
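
To put a rough number on that extra layer, here's a self-contained sketch (opcodes and workload invented for illustration) that computes the same sum natively and through a tiny switch-dispatch interpreter; on typical hardware the interpreted loop comes out several times slower, purely because of the software fetch and dispatch:

    #include <chrono>
    #include <cstdint>
    #include <iostream>

    // One toy opcode is enough: ADD_I adds the loop counter to an accumulator.
    enum Op : std::uint8_t { ADD_I, HALT };

    int main() {
        constexpr std::uint64_t N = 100'000'000;
        using clock = std::chrono::steady_clock;

        // Native loop: the work compiles straight to machine code.
        volatile std::uint64_t native = 0;
        const auto t0 = clock::now();
        for (std::uint64_t i = 0; i < N; ++i) native = native + i;
        const auto t1 = clock::now();

        // Interpreted loop: same work, but every step pays for fetch + dispatch.
        volatile std::uint8_t code[] = {ADD_I, HALT};  // volatile: force real fetches
        volatile std::uint64_t interp = 0;
        const auto step = [&](std::uint64_t i) {
            std::size_t pc = 0;
            for (;;) {
                switch (code[pc++]) {   // fetch + decode, in software
                case ADD_I: interp = interp + i; break;
                case HALT:  return;
                }
            }
        };
        const auto t2 = clock::now();
        for (std::uint64_t i = 0; i < N; ++i) step(i);
        const auto t3 = clock::now();

        const auto ms = [](auto a, auto b) {
            return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
        };
        std::cout << "native:      " << ms(t0, t1) << " ms\n"
                  << "interpreted: " << ms(t2, t3) << " ms\n";
    }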

Furthermore, garbage collectors are among the main reasons why the JVM and the CLR are not extremely efficient: the collector runs continuously, but not at the exact moment a value becomes deletable, so memory stays in use longer than necessary and the GC itself adds CPU overhead. Anyway, in my opinion, the CLR is considerably more efficient than the JVM, and it also allows unsafe code (in languages that support it, such as C#), which can really make a difference when used appropriately, though it's a double-edged sword and can easily turn into a nightmare in the wrong hands.
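
By contrast, C++ releases a resource at the exact moment a value can be deleted, which is one reason it doesn't pay that cost. A minimal sketch of this deterministic cleanup (the class name is mine):

    #include <iostream>
    #include <string>

    // A resource wrapper: its destructor runs at a known, fixed point.
    struct Resource {
        std::string name;
        explicit Resource(std::string n) : name(std::move(n)) {
            std::cout << "acquire " << name << '\n';
        }
        ~Resource() { std::cout << "release " << name << '\n'; }
    };

    int main() {
        Resource outer("outer");
        {
            Resource inner("inner");
        }   // "release inner" prints here, the instant the scope ends;
            // no collector has to discover later that the value is dead.
        std::cout << "after inner scope\n";
    }   // "release outer" prints here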

Obviously, most runtime environments, including the JVM and the CLR, actually implement a JIT compiler so as to turn the most critical sections of code into machine code. The JVM, for instance, interprets bytecode first and JIT-compiles the most frequently called methods. The CLR goes further: it compiles a method's body to native code the first time the method is called and keeps that native code cached in memory, so every later call jumps straight to the compiled version instead of being compiled again (the Global Assembly Cache, incidentally, stores shared assemblies rather than JIT output).
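
As a toy model of that compile-on-first-call-then-reuse behavior (all names are invented; this sketches the caching pattern, not the CLR's actual internals):

    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Stand-in for "JIT-compiled native code".
    using Compiled = std::function<int(int)>;

    Compiled jitCompile(const std::string& name) {
        std::cout << "JIT-compiling " << name << "...\n";  // expensive, done once
        return [](int x) { return x * 2; };                // pretend machine code
    }

    int call(const std::string& name, int arg) {
        static std::unordered_map<std::string, Compiled> cache;  // the code cache
        auto it = cache.find(name);
        if (it == cache.end())                    // first call: compile and cache
            it = cache.emplace(name, jitCompile(name)).first;
        return it->second(arg);                   // later calls: reuse cached code
    }

    int main() {
        std::cout << call("doubleIt", 21) << '\n';  // compiles, then runs
        std::cout << call("doubleIt", 10) << '\n';  // cache hit: runs directly
    }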
