
The biggest hog in ZPE

11 May 2017 at 16:47
ZPE has been gradually getting faster and faster thanks to more optimisations at compile time. However, this month I began to really delve into the deep end and found some new ways that ZPE can be optimised. In this post I will discuss exactly what I plan to do.

First things first, what's the biggest, slowest operation in ZPE? The answer to that is file reads and writes.

So to optimise ZPE for reading and writing files, the best option is to reduce how often it does them. Just as Chrome slows down at start up when it has many extensions installed, having many ZPE plugins and start up configurations slows start up. Each of these configurations needs to be pushed to any children that this ZPE instance spawns, which in turn reduces performance.

So what's the solution? ZPE has always been fast enough to start, compile and even interpret. However, when ZPE spawns a child, or even a thread, the worker child (isn't it a cruel world where a couple of milliseconds into a child's life it's given work to do? :P) then has to access all of its parent's properties. This hinders the performance of the parent more than the child.

The solution: I will move most of the file reads to the main executable and perform them once. Done.
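As a rough sketch of that idea (the class and method names here are my own, not ZPE's): the main process loads each configuration once and hands the cached copy to every worker that asks, so spawning a child never triggers another disk read.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: the parent process performs each expensive read
// once and caches the result; child workers receive the cached copy.
public class ConfigCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // stands in for a disk read
    private int reads = 0; // counts how often the slow loader actually ran

    public ConfigCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String load(String path) {
        return cache.computeIfAbsent(path, p -> {
            reads++;
            return loader.apply(p);
        });
    }

    public int reads() { return reads; }

    public static void main(String[] args) {
        ConfigCache cache = new ConfigCache(p -> "config for " + p);
        // Three workers ask for the same configuration file...
        cache.load("plugins.cfg");
        cache.load("plugins.cfg");
        cache.load("plugins.cfg");
        // ...but the expensive read only happened once.
        System.out.println("disk reads: " + cache.reads());
    }
}
```

The same pattern applies whether the "loader" is a file read, a plugin scan or a start up configuration parse: pay the cost once in the parent, share the result.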

Number 2, condition checking. I have noticed the difference in performance between loops in ZPE and in native Java. This is the number one issue in ZPE. Whilst ZPE still manages loops reasonably quickly, it does not compare to native Java.

Why is this? ZPE currently performs no optimisation on conditions. This means a condition such as $i < 10 still needs to be fully re-evaluated on every iteration of the loop. Now for the fun part. How can this be changed? Well, I'm not going to reveal everything until I've implemented it, but I will give you an idea: it will use compiler-based optimisation to optimise the condition beforehand.
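One plausible reading of "optimising the condition beforehand" (this is my illustration, not ZPE's actual implementation) is to have the compiler turn a condition like $i < 10 into a reusable predicate once, so the loop only evaluates it rather than re-analysing it on every iteration:

```java
import java.util.function.IntPredicate;

// Hypothetical sketch: a compiler pass builds the condition once, and the
// interpreter's loop only calls the resulting predicate each iteration.
public class PrecompiledCondition {
    // Stand-in for a pass that turns "<var> < <limit>" into executable code.
    static IntPredicate compileLessThan(int limit) {
        return i -> i < limit;
    }

    public static void main(String[] args) {
        IntPredicate condition = compileLessThan(10); // built once, before the loop
        int iterations = 0;
        for (int i = 0; condition.test(i); i++) {
            iterations++; // the condition is only evaluated here, never re-analysed
        }
        System.out.println("iterations: " + iterations);
    }
}
```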

Number 3, function calls. Function calls are extremely quick in the common case but can be incredibly slow in the worst case. Function calls were optimised in version 1.5 after being modified in version 1.4 to use a mapping system. There is a point where function names collide in the map and we have a problem, an O(N) problem to be precise! I will look at improving the hash function so it spreads names more widely. Of course, this has an effect on memory, so it needs to be worthwhile.
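To see why collisions matter, here is a small illustration (my own, not ZPE's code) of how a hash that dumps every name into one bucket degrades lookup to a linear scan, while a hash that spreads names keeps the chains short:

```java
import java.util.function.ToIntFunction;

// Hypothetical illustration of the collision problem: the longest bucket
// chain is the worst-case number of comparisons for a function lookup.
public class HashSpread {
    static int maxChain(String[] names, int buckets, ToIntFunction<String> hash) {
        int[] counts = new int[buckets];
        int max = 0;
        for (String n : names) {
            int b = Math.floorMod(hash.applyAsInt(n), buckets);
            counts[b]++;
            max = Math.max(max, counts[b]);
        }
        return max;
    }

    public static void main(String[] args) {
        String[] functions = {"print", "len", "substr", "upper",
                              "lower", "abs", "max", "min"};
        // A degenerate hash: every name lands in bucket 0, so a lookup
        // must compare against all N entries, i.e. O(N).
        System.out.println("worst case chain: " + maxChain(functions, 16, n -> 0));
        // A spreading hash (Java's own String.hashCode here) keeps chains short.
        System.out.println("spread chain: " + maxChain(functions, 16, String::hashCode));
    }
}
```

The memory trade-off mentioned above is visible here too: more buckets mean shorter chains but a larger table.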

Also, I intend to merge a few things, namely functions and constants. What?! You don't need to understand the details, but it should improve memory usage.
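One way such a merge could work (again, my guess at the idea, not the actual implementation) is to store a constant as a zero-argument entry in the same table as functions, so only one map, and one lookup path, is needed:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of a merged symbol table: constants are stored as
// zero-argument entries alongside functions, saving a second map.
public class SymbolTable {
    private final Map<String, Function<int[], Integer>> symbols = new HashMap<>();

    void defineFunction(String name, Function<int[], Integer> body) {
        symbols.put(name, body);
    }

    void defineConstant(String name, int value) {
        symbols.put(name, args -> value); // a constant ignores its arguments
    }

    int call(String name, int... args) {
        return symbols.get(name).apply(args);
    }

    public static void main(String[] args) {
        SymbolTable t = new SymbolTable();
        t.defineFunction("double", a -> a[0] * 2);
        t.defineConstant("ANSWER", 21);
        // Both resolve through the same table and the same lookup path.
        System.out.println(t.call("double", t.call("ANSWER")));
    }
}
```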