Persistent live queries, surviving to offline periods #9786
Any suggestions on how this could be implemented, at a high level? For example, the client sending a …
We can reach this goal with minimal changes to the server and client SDKs. Current behavior:
Additionally, there is no clear SDK method to quickly determine whether the WebSocket connection is active and reliable. Furthermore, when the WS connection is lost and restored, there is no documented strategy to identify whether the disconnection period may have resulted in lost data and, therefore, whether a re-query is needed. Desired behavior for subscriptions marked as persistent:
This approach minimizes changes on the client side and lets the server keep responsibility for keeping clients in sync, even across offline periods. There is no need, in my view, for a lastSyncDate parameter. If we want even more reliability, we should keep each event in a queue until an ACK is received.
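The "keep each event in a queue until an ACK is received" idea above could be sketched roughly as follows. This is a minimal illustration, not Parse Server API; the class and method names (`PendingEventQueue`, `enqueue`, `ack`, `replay`) are assumptions.

```javascript
// Minimal sketch of a per-client ACK queue (names are assumptions, not
// Parse Server API): events stay queued until the client acknowledges them,
// and anything still pending is replayed on reconnect.
class PendingEventQueue {
  constructor() {
    this.nextId = 1;
    this.pending = new Map(); // eventId -> event payload, in insertion order
  }

  // Called when an event is pushed to the client; returns the id the client
  // must echo back in its ACK.
  enqueue(event) {
    const id = this.nextId++;
    this.pending.set(id, event);
    return id;
  }

  // Called when the client ACKs an event; only then is it discarded.
  ack(id) {
    return this.pending.delete(id);
  }

  // On reconnect, everything still un-ACKed is replayed in order.
  replay() {
    return [...this.pending.values()];
  }
}
```

The server would hold one such queue per client connection; a client that was offline simply receives `replay()` on reconnect and ACKs each event as it applies it.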
Do we really want to cache every missed event and then replay them sequentially when the client reconnects? That includes obsolete events: for example, a field value that changes N times would cache N events, of which N−1 end up irrelevant. This approach adds noise, but it also risks serious scalability issues. Multiply many clients by many events and you get hyperbolic cache growth, potentially triggering huge fluctuations in data storage, cache writes, and network traffic. Imagine a whole network region going down temporarily with many clients; that could become a meltdown scenario for the server-side infrastructure. Also, with the TTL you mentioned, the client receives N−x events, with x being the events discarded by the TTL. If a use case can tolerate that, why not always discard the obsolete N−1 events? I would argue for a mechanism that just takes into account a …
Clearly, there is a theoretical use case for any approach, but from a practical point of view I think discarding obsolete events and caching nothing is useful for more applications than caching everything. If the use case requires recording all value changes, then the better approach is probably to store those value changes in a class and subscribe to it. For example, a switch that is flipped many times where we need to know every flip, regardless of its final state: that should not be handled by LiveQuery caching flips, but written to a class for persistent storage. The same goes for processing-queue-like behavior; I don't see that as a LiveQuery use case, because there are more efficient ways to do it.
There are advantages and trade-offs with each approach. Let's break them down.

1. Caching each event
All event-driven protocols (MQTT, PubSub, XMPP) rely on some form of spool/queue. These are logic-agnostic but storage-sensitive. This means fewer logic-related failures, and one table can store all events: one query is enough on reconnect, regardless of subscription count.
PROS: Low logic overhead; Single source of truth. Obsolete data can be mitigated by keeping only the latest version of the object.

2. Send modified objects on reconnect
PROS: No event storage required; No redundancy.

3. Cache only event metadata (no payload)
Instead of storing full event data, cache only the Class Name, Event type, and Object ID.
PROS: Lightweight storage; Still tracks all changes.

Extra Considerations (regardless of the chosen solution):
Yes, I'm not sure how that could be addressed at first glance. Having to query all subscribed classes is still negligible compared to building up and managing a cache backlog. What still puzzles me is the TTL option: if an app can deal with lost events, why would it need a full object history in the first place? The lost events are unrecoverable; for example, there is no way to recover a field's previous values beyond what is cached. In any case, LiveQuery could offer these as options so the developer can set the caching strategy according to their needs.
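The metadata-only caching discussed above, combined with discarding obsolete events by keeping only the latest event per object, could be sketched like this. The class and method names (`MetadataCache`, `record`, `snapshot`) are illustrative assumptions, not existing Parse Server API.

```javascript
// Sketch of metadata-only event caching (names are illustrative): store only
// (className, eventType, objectId), keyed per object, so N changes to the
// same object collapse into a single entry instead of N replayable events.
class MetadataCache {
  constructor() {
    this.entries = new Map(); // "className:objectId" -> latest event type
  }

  record(className, eventType, objectId) {
    // Overwrites any previous event for the same object: obsolete
    // intermediate events are discarded rather than replayed.
    this.entries.set(`${className}:${objectId}`, eventType);
  }

  // On reconnect, the client learns only the final outcome per object and
  // can re-fetch the payloads it still cares about.
  snapshot() {
    return [...this.entries].map(([key, eventType]) => {
      const [className, objectId] = key.split(':');
      return { className, eventType, objectId };
    });
  }
}
```

With this shape, storage grows with the number of distinct changed objects rather than the number of events, which addresses the cache-backlog concern at the cost of losing intermediate history.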
Current Limitation
To keep data synchronized on my client, I must perform several queries and re-subscribe to live queries whenever the app comes back online after being offline, even for a short period.
Feature / Enhancement Description
I would like to implement persistent live queries that survive periods of offline operation. When the app comes back online, the server should restore existing subscriptions and push all the events that occurred during the offline period to the client.
Example Use Case
Alternatives / Workarounds
Currently, I need to perform a query.find() operation to fetch all books since my last connection:
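A sketch of that workaround follows. With the Parse JS SDK the per-class fetch would be roughly `new Parse.Query(className).greaterThan('updatedAt', lastSyncDate)` followed by `find()`; here the fetch is passed in as a function so the per-class round-trip cost is visible without a running server, and `lastSyncDate` is assumed to be persisted locally by the app.

```javascript
// Sketch of the current catch-up workaround: one query per subscribed class.
// `fetchClass` stands in for the Parse SDK call (an assumption for
// illustration), e.g. a wrapper around Parse.Query(...).find().
async function catchUp(classNames, lastSyncDate, fetchClass) {
  const results = {};
  for (const className of classNames) {
    // One network round trip per class: the inefficiency this issue describes.
    results[className] = await fetchClass(className, lastSyncDate);
  }
  return results;
}
```

Usage would look like `await catchUp(['Book', 'Author', 'Category'], lastSyncDate, parseFetch)`, which makes the N-calls cost explicit.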
Imagine having Books, Authors, Categories, Tags, Orders, FavoriteBooks, and so on. This approach is inefficient because it requires a separate REST call for each class.
3rd Party References
Firebase Realtime Database and Realm provide similar offline persistence capabilities:
Implementing this feature would bring Parse Platform more in line with these competing solutions.