Description
Java API client version
7.17.9
Java version
19
Elasticsearch Version
7.17.5
Problem description
In our service, the allocation rate with the Java API client is dramatically higher than it was with the old High Level REST Client: 550 MB/s vs 330 MB/s (please see the graph below). It now accounts for more than 44% of all allocations in the service.
Most of these allocations happen during response deserialization, and in particular in the JsonData.to call.
Our service sends a large number of search requests to Elasticsearch. We have _source disabled and use doc value fields to retrieve the data, so each hit contains only 2 fields (and we may have up to 200 hits per request). Given that, the allocation rate looks disproportionately high.
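For context, a minimal sketch of the kind of search request our service issues (index and field names are placeholders, not our real ones; requires the elasticsearch-java client dependency and a running cluster, so it is illustrative rather than a standalone reproducer):

```java
import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch.core.SearchResponse;
import co.elastic.clients.json.JsonData;

public class SearchSketch {
    static void search(ElasticsearchClient client) throws Exception {
        SearchResponse<Void> response = client.search(s -> s
                .index("my-index")                       // placeholder index name
                .source(src -> src.fetch(false))         // _source disabled
                .docvalueFields(f -> f.field("field1"))  // only doc value fields
                .docvalueFields(f -> f.field("field2"))
                .size(200),                              // up to 200 hits
            Void.class);

        response.hits().hits().forEach(hit -> {
            // Each doc value field arrives as JsonData; converting it with
            // JsonData.to is where the allocations show up in our profiles.
            JsonData field1 = hit.fields().get("field1");
        });
    }
}
```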
I'm not sure of the exact cause, but the issue is definitely in the JsonDataImpl.to method. If, instead of calling JsonData.to, I call JsonData.toString and then parse the resulting string locally in my code, it works fine. You can see in the second graph below that this workaround completely eliminates the allocation issue.
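A sketch of the workaround, assuming Jackson as the local parser (any JSON parser would do; MyField is a placeholder for our actual field type):

```java
import co.elastic.clients.json.JsonData;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Workaround {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Instead of the allocation-heavy jsonData.to(MyField.class),
    // serialize to a string and parse it locally:
    static MyField convertViaString(JsonData jsonData) throws Exception {
        return MAPPER.readValue(jsonData.toString(), MyField.class);
    }

    // Placeholder for our real field type.
    public static class MyField {
        public String value;
    }
}
```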
I'm also attaching an excerpt from the flame graph.
Could you please take a look? Do you have plans to optimize JsonDataImpl? I see this comment in the JsonDataImpl code: "// FIXME: inefficient roundtrip through a string. Should be replaced by an Event buffer structure." Could it be related?
Allocation rate when switching from the new client to the old client and back to the new one
Allocation rate when switching (using the new client) from JsonData.toString -> JsonData.to -> JsonData.toString