From a cursory look, I think this is the cost of the cache key ref getting created for each iteration, which accounts for all the extra allocations even though the key of course remains the same for each pass.
Fixing this should have a fairly large impact on OPA's evaluation performance generally, as global references in iteration are extremely common. An easy solution would of course be to have the compiler rewrite the first example to the second one... but not recreating the virtual cache key with each lookup is probably the more robust option.
And as a pleasant side-effect, somewhat improved performance when referencing
local vars as well :)
- Make integer equality checks of `Number`s cheaper by not delegating
  to `Compare` unless required
- Fix mistake where IndexResult structs never got put back into the
`sync.Pool` they came from. Oops!
- Slightly cheaper `numbers.range` by avoiding allocating an error
- Add `*eval.unknownRef` method to avoid allocations from `any` boxing
of refs
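The `sync.Pool` fix above is worth spelling out: a pool only helps if objects are returned to it after use, otherwise every `Get` falls through to `New` and allocates. A minimal sketch of the pattern, with an illustrative `indexResult` type standing in for OPA's actual `IndexResult`:

```go
package main

import (
	"fmt"
	"sync"
)

// indexResult is a hypothetical stand-in for OPA's IndexResult struct.
type indexResult struct {
	rules []string
}

// pool hands out reusable indexResult values; New only runs when the
// pool is empty, i.e. when nothing has been Put back yet.
var pool = sync.Pool{
	New: func() any { return &indexResult{} },
}

func getIndexResult() *indexResult {
	r := pool.Get().(*indexResult)
	r.rules = r.rules[:0] // reset state before handing out for reuse
	return r
}

// putIndexResult returns the struct to the pool. Forgetting this call
// is exactly the bug described above: the pool never gets refilled, so
// every lookup allocates a fresh struct.
func putIndexResult(r *indexResult) {
	pool.Put(r)
}

func main() {
	r := getIndexResult()
	r.rules = append(r.rules, "p", "q")
	fmt.Println(len(r.rules))
	putIndexResult(r) // without this, the pool is useless
}
```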
I've also added a benchmark to easily compare the costs of these two lookup types,
and I'm pretty happy with the results:
**main**
```
global_ref-12 20461 57639 ns/op 24473 B/op 903 allocs/op
local_var-12 42226 28495 ns/op 13363 B/op 410 allocs/op
```
**change**
```
global_ref-12 25377 46168 ns/op 14627 B/op 496 allocs/op
local_var-12 46336 25671 ns/op 11488 B/op 300 allocs/op
```
Fixes open-policy-agent#7654
While there are still potential improvements to be made, and referencing a global rule
is still more expensive than a local var, the difference is now small enough that the
global -> local rewrite doesn't feel warranted for anything but the most demanding requirements.
For that reason, I consider this fixed. The benchmarks are great for anyone looking to
improve this further :)
Signed-off-by: Anders Eknert <anders@styra.com>
I was mistaken wrt cache keys — that's a cost related to calling built-in functions, which I was benchmarking just earlier. See the PR linked for some actual numbers.
Using a global ref in iteration costs about twice as much as referencing a local variable.
Doing a global -> local reassignment is obviously silly, but it effectively halves the cost of the iteration.
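For context, the two lookup shapes compare roughly like this. This is a hypothetical Rego sketch of the pattern being described, not the actual benchmark policy:

```rego
package example

import rego.v1

# A rule standing in for any global rule (virtual document).
users contains u if {
	some u in input.all_users
	u.active
}

# Global ref in iteration: the virtual document lookup happens
# on every pass of the loop.
names_global contains u.name if some u in users

# The "silly" global -> local reassignment: bind the global once,
# then iterate over the local variable.
names_local contains u.name if {
	us := users
	some u in us
}
```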