If you have ever built a Progressive Web App (PWA), you already know how the story usually goes.
You add a manifest, generate a service worker, tick the installability boxes, and suddenly the app can sit on a phone home screen or launch in its own window on desktop. Very nice. Very shiny. Teknologia!
Then… reality strikes back.
Your app is authenticated. It pulls a lot of data from an API. Users expect it to keep working when connectivity gets weird, not just when Wi-Fi disappears in a perfectly cinematic way. They also expect it not to lie, not to show nonsense, and definitely not to trap them behind a spinner because one request is having a bad day.
That is the point where a PWA stops being a badge and starts becoming architecture (and also where things get complicated).
Recently, I worked on a Vue 3 frontend where that became painfully clear in the best possible way. We wanted proper installability, yes. But more importantly, we wanted the app to behave like a serious product when the network was unstable, when the session expired, or when a new deployment landed while old cached data was still hanging around like an uninvited guest, messing things up.
So this is not a “how to add a PWA in five minutes” piece; there are already enough of those. This is more about what happens when you try to make a Vue 3 PWA behave well in real life, in a complex, multi-faceted application.
The frontend itself is fairly modern and, on paper, pleasantly boring:
- Vue 3
- Pinia for client-side application state
- Pinia Colada for server state and query lifecycle management
- vite-plugin-pwa for manifest and service worker generation
- IndexedDB for persistence

This is a very nice combination, by the way. (I guess I’m saying this to myself.)
Vue 3 gives you the reactivity model and composition primitives that make this kind of work manageable. Pinia does a great job managing app-level state that actually belongs to the client, sharing things like user details and preferences between the different parts of the SPA.
But Pinia alone is not where I would want to manage remote data caching for a complex app.
That is where Pinia Colada really shines.
It gives you a chance to set proper query keys, stale times, retries, placeholder data, invalidation, and a much cleaner mental model for server state. That distinction matters a lot. Local state and server state are not the same thing, and pretending otherwise is one of the fastest ways to end up with a big mess.
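As a rough sketch, assuming Pinia Colada’s `useQuery` options (`key`, `query`, `staleTime`, `placeholderData`); the endpoint, query key, and timings here are invented, not the app’s real ones:

```typescript
import { useQuery } from "@pinia/colada";

// Illustrative server-state query; key, endpoint, and timings are invented.
export function useDashboard(orgId: string) {
  return useQuery({
    key: () => ["dashboard", orgId], // stable, serializable query key
    query: async () => {
      const res = await fetch(`/api/orgs/${orgId}/dashboard`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    },
    staleTime: 60_000,   // treat cached data as fresh for one minute
    placeholderData: [], // render something while the first fetch runs
  });
}
```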
We used Pinia for data that belongs to the user and the current state of the application. Theme preferences, language, some UI settings, selected organization context, and dashboard behavior all fit nicely there. That data was also persisted in IndexedDB, and the store merged saved values with defaults so it could tolerate schema evolution without drama.
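A sketch of that defaults-merging idea, with an invented settings shape (the real store had more fields, of course):

```typescript
// Hypothetical shape of persisted user settings; all names are illustrative.
type UserSettings = {
  theme: "light" | "dark";
  language: string;
  dashboardCompact: boolean;
};

const DEFAULT_SETTINGS: UserSettings = {
  theme: "light",
  language: "en",
  dashboardCompact: false,
};

// Merge whatever survived in IndexedDB over the current defaults.
// Unknown keys from an older schema are dropped, and missing keys fall
// back to defaults, so the store tolerates schema evolution without drama.
function hydrateSettings(saved: unknown): UserSettings {
  const result: UserSettings = { ...DEFAULT_SETTINGS };
  if (saved !== null && typeof saved === "object") {
    for (const key of Object.keys(DEFAULT_SETTINGS) as (keyof UserSettings)[]) {
      const value = (saved as Record<string, unknown>)[key];
      // Only accept values whose runtime type matches the default's type.
      if (value !== undefined && typeof value === typeof DEFAULT_SETTINGS[key]) {
        (result as Record<string, unknown>)[key] = value;
      }
    }
  }
  return result;
}
```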
We used Pinia Colada for API-backed data.
That meant dashboards, tables, detail panels, and all the stuff that should be considered a cached view of server truth, not a permanent client truth. Once you accept that distinction, a lot of design decisions get easier.
For example, it becomes obvious that server data can be refetched, invalidated, or even persisted and rehydrated without touching client state. That last part, persistence, is where things get interesting.
We added a custom persistence layer on top of Pinia Colada that periodically snapshots successful query results into IndexedDB. On startup, those snapshots are hydrated back into the query cache.
Not every query, obviously.
Authentication-related keys were denylisted on purpose. Persisting user profile or auth-sensitive state just because “offline first” sounds nice is how you turn a good idea into a security Fukushima.
So yes, the app can reopen and still show useful data offline. But it does so with guardrails. (Also, this should have been quite obvious, and if you thought otherwise, we cannot be friends)
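A sketch of what such a denylist filter might look like; the key prefixes here are invented, not the app’s real query keys:

```typescript
// Query key prefixes that must never be snapshotted into IndexedDB.
// The prefixes are illustrative; the real app had its own list.
const PERSIST_DENYLIST = ["auth", "session", "user-profile"];

// Decide whether a query's result is safe to persist offline.
function isPersistable(queryKey: string[]): boolean {
  return !PERSIST_DENYLIST.includes(queryKey[0]);
}
```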
The actual PWA setup used vite-plugin-pwa, with a standard manifest and Workbox under the hood.
The installability checklist was the usual one: a valid web app manifest (name, icons, start URL, display mode), HTTPS, and a registered service worker.
One small but important detail: the service worker was intentionally disabled in development.
That may sound odd at first, but it makes perfect sense. Here is why: the dev-mode stub service worker generated by the plugin can interfere with installability and muddy the behavior you are actually trying to test. If you want to validate the real install flow, you need a production build served in a production-like way. Pretending otherwise is just self-inflicted confusion. (And once again, we can’t be friends.)
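A minimal vite-plugin-pwa setup along these lines might look like this; the manifest values are placeholders, not the project’s real ones:

```typescript
// vite.config.ts — a sketch, not the project's actual configuration
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
import { VitePWA } from "vite-plugin-pwa";

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      registerType: "autoUpdate",
      // Keep the dev-mode service worker off; validate installs on real builds.
      devOptions: { enabled: false },
      manifest: {
        name: "Example App", // placeholder values below
        short_name: "Example",
        start_url: "/",
        display: "standalone",
        icons: [{ src: "/icon-512.png", sizes: "512x512", type: "image/png" }],
      },
    }),
  ],
});
```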
Another very deliberate choice was the caching strategy.
Static assets were precached, as expected. JavaScript, CSS, icons, fonts, the usual suspects.
API calls, on the other hand, were set to NetworkOnly.
I know. That sounds almost rebellious in a PWA article. Aren’t PWAs supposed to cache everything and save the world?
Wellll…. not necessarily.
In an authenticated, data-heavy application, aggressively caching API responses at the service worker layer can create more problems than it solves. You risk serving stale data that looks fresh, you complicate session handling, and you start debugging behavior across multiple cache layers while quietly losing your sanity.
So we kept the service worker focused on static assets and let application data be managed by the app itself through Pinia Colada and IndexedDB persistence.
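In vite-plugin-pwa / Workbox terms, that split can be expressed roughly like this (the API origin is a placeholder):

```typescript
import { VitePWA } from "vite-plugin-pwa";

// Sketch: precache static assets, never let Workbox cache API responses.
const pwaPlugin = VitePWA({
  workbox: {
    // Precache the usual static suspects.
    globPatterns: ["**/*.{js,css,html,ico,png,svg,woff2}"],
    runtimeCaching: [
      {
        urlPattern: /^https:\/\/api\.example\.com\//, // placeholder API origin
        handler: "NetworkOnly", // API data is managed by the app, not Workbox
      },
    ],
  },
});
```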
That was, in my opinion, the right choice. I’ll also admit that it took me a while to get there. It seems obvious NOW, but we had a lot on our plate at the time.
One of the biggest traps in this kind of work is assuming navigator.onLine tells you what you actually need to know.
It does not. Or at least, not reliably enough.
A browser can be technically online while your API is unreachable. Containerized local environments are especially good at making this obvious. Your machine still has a network interface, the browser feels optimistic, and your application is sitting there unable to talk to the backend. Great.
Also note that the optimistic UI approach always sounds great until things break and you have no idea how the app will actually react mid-fetch.
So instead of trusting a single browser flag, the app used a multi-source network status model.
It started from a pessimistic assumption: offline until proven otherwise.
Then it verified connectivity using three sources:

- the outcome of real API requests
- a periodic backend health check
- the browser’s online and offline events

This worked much better than a naive online/offline toggle.
If the app successfully talked to the backend, it was online.
If requests failed at the network level, it was offline.
If the browser reported a connectivity event, that was treated as an extra signal, not the whole truth.
The health poller also used different intervals depending on state. Slower when online, more aggressive when offline so recovery could be detected quickly.
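The whole model can be sketched as a small state holder; the names and intervals here are illustrative, not the app’s actual code:

```typescript
// Sketch of a multi-source network status model.
type Source = "request" | "health" | "browser-event";

class NetworkStatus {
  // Pessimistic default: offline until proven otherwise.
  online = false;

  report(source: Source, ok: boolean): void {
    if (source === "browser-event") {
      // Browser events are only a hint: going offline is trusted,
      // but coming online still needs backend confirmation.
      if (!ok) this.online = false;
      return;
    }
    // Real request outcomes and health checks are authoritative.
    this.online = ok;
  }

  // Poll faster while offline so recovery is detected quickly.
  pollIntervalMs(): number {
    return this.online ? 30_000 : 5_000;
  }
}
```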
That sounds small, but it has a very real UX effect. If the app comes back online and the user is left staring at an outdated offline banner for half a minute, that “small” delay suddenly feels very personal. We’ll see later how we handled the UI part.
Let me dare to say something controversial: many PWAs care more about being cacheable than being usable.
That is quite backwards, I think.
What users actually notice is not that a service worker exists; they neither know nor care. They notice whether the app behaves honestly.
This goes a bit out of scope, but we developers have this iron conviction that users will be in awe of things that, honestly, do not really matter to them at all.
So, in this implementation, offline mode was treated as a first-class UX state:

- a clear banner while the app was offline
- toasts announcing connectivity changes
- a dedicated bootstrap screen when starting offline
- automatic query invalidation on reconnect
That last piece is very important.
When the app reconnects, you do not want the UI to sit there pretending everything is still current. The query cache should be nudged back into contact with reality, asap.
The reconnect flow did exactly that. A shared network-status composable exposed a reconnect callback mechanism, and Pinia Colada invalidated active queries as soon as the app was considered online again.
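A minimal sketch of that callback mechanism, with invented names (in the real app, one subscriber invalidated the active Pinia Colada queries):

```typescript
// Tiny reconnect callback registry, as a shared composable might expose it.
type ReconnectCallback = () => void;

const callbacks = new Set<ReconnectCallback>();

// Register a callback; returns an unsubscribe function.
function onReconnect(cb: ReconnectCallback): () => void {
  callbacks.add(cb);
  return () => callbacks.delete(cb);
}

// Called when the network status model flips back to online.
function notifyReconnected(): void {
  for (const cb of callbacks) cb();
}
```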
That made recovery feel alive instead of passive.
This is where many nice offline stories fall apart.
If your app is authenticated, you cannot just think about cached data. You also have to think about session validity!
What happens if the user goes offline while already logged in, spends hours browsing cached data, and then reconnects after the server-side session has expired?
Now we are talking.
The good news is that (obviously) the browser still knows how to send cookies. The less good news is that the backend will probably no longer accept them.
So the reconnect flow had to account for this: when connectivity returned, the session was revalidated against the backend, and a 401 was treated as a normal, expected outcome rather than an error to paper over.

This was one of those moments where being conservative paid off.
Instead of trying to fake continuity with questionable assumptions, the app treated re-authentication as a normal recovery path. If the session was still valid, great. Fresh data loaded. If not, the user was redirected and brought back into a clean state. Remember, security first!
No heroic hacks. No spooky half-authenticated limbo. No “please refresh and hope for the best” user journey.
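The decision logic on reconnect can be sketched like this; `checkSession` and the endpoint behind it are hypothetical, and the three outcomes map to the cases above:

```typescript
// Sketch of the reconnect recovery decision. `checkSession` is an
// injected, hypothetical call to a session-validation endpoint that
// resolves to an HTTP status code.
type Recovery = "refresh-data" | "redirect-to-login" | "stay-offline";

async function recoverSession(
  checkSession: () => Promise<number>
): Promise<Recovery> {
  try {
    const status = await checkSession();
    if (status === 401) return "redirect-to-login"; // session expired
    if (status >= 200 && status < 300) return "refresh-data"; // still valid
    return "stay-offline"; // server unhappy; do not pretend otherwise
  } catch {
    return "stay-offline"; // network-level failure, keep offline mode
  }
}
```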
There was also a safer startup path for the case where the user opened the app while offline. In that situation, the app could not validate the session immediately, so it showed a dedicated offline bootstrap screen with a retry action rather than a blank shell or misleading content.
Again, not flashy, mostly just common sense.
A lot of the practical value came from IndexedDB.
It stored two very different categories of data:
- user settings and preferences, written by the Pinia store
- query cache snapshots, written by the Pinia Colada persistence layer

Those two stores had very different lifecycles, and treating them differently mattered.
User settings should survive deployments. Query cache often should not.
That is why version-aware cache reconciliation was added during app bootstrap. On startup, the app checked the current deployment version against a value stored locally. If the version changed, volatile caches were cleared before hydration.
That included the persisted query cache snapshots, which were dropped before any hydration could happen.

User preferences were kept, safe and sound.
This avoided a very common kind of frontend weirdness where a new deployment lands with slightly different data assumptions, but the browser proudly revives old cached structures as if nothing happened. It is one of those bugs that makes you question your life choices because everything looks fine until one specific screen explodes for one specific user, and you barely have any control then.
A simple version marker might not be cool, but it gets the job done.
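The check itself is tiny; something along these lines, with invented names:

```typescript
// Sketch of version-aware cache reconciliation at bootstrap.
// `storedVersion` would come from IndexedDB; the return value says
// whether volatile caches must be cleared before hydration.
function shouldClearVolatileCaches(
  storedVersion: string | null,
  currentVersion: string
): boolean {
  // First run: nothing to clear, just record the current version.
  if (storedVersion === null) return false;
  return storedVersion !== currentVersion;
}
```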
A few things genuinely paid off here.
Separating client state from server state made the whole architecture easier to reason about. Pinia owned preferences. Pinia Colada owned fetchable data. Fewer mixed responsibilities, fewer accidental lies.
Keeping the service worker out of the data path also paid off. Precaching static assets was enough; API caching stayed in the app layer, where query invalidation, auth handling, TTL, and persistence rules were easier to control.
Treating offline as a UX state mattered too. The toasts, banner, bootstrap screen, and reconnect behavior made the app feel intentional. Users do not care about your internal architecture. They care whether the interface makes sense when the network does not.
And invalidating queries on reconnect made the app recover quickly, without requiring manual reloads or random clicks to wake it back up.
Let us not romanticize this too much and turn this blog post into a LinkedIn post: a setup like this also brings real complexity.
Even with conservative choices, you still have browser cache storage, IndexedDB, in-memory query cache, and application state to think about. If you are sloppy, stale data becomes a hobby.
Offline support was also partial, by design. Static assets were available offline. Previously fetched query data could be restored. But new API requests still needed the network. That is not failure. That is honesty.
The moment cookies, redirects, and session expiry are in play, your PWA architecture needs a lot more discipline. You are not building a notes app in a conference talk anymore.
Testing installability, service worker updates, and real offline behavior requires production-like conditions. That is a bit annoying, but also fair. The browser is under no obligation to validate our fantasies (lol).
So, was it worth it? Absolutely!
Not because the app became installable. Sure, that’s cool, but honestly, it is the least interesting part.
It was worth it because the application became more resilient, more transparent, and more respectful of the user.
With Vue 3, Pinia, and Pinia Colada, we ended up with a setup where:

- client state and server state live in clearly separated layers
- the app stays usable offline, with guardrails around auth-sensitive data
- reconnects trigger honest revalidation instead of stale optimism
- new deployments cannot resurrect incompatible cached data
That, to me, is where PWAs start becoming genuinely interesting.
Not when they can be installed.
Not when Lighthouse gives you a gold star.
Not when a service worker exists somewhere in the background doing ninja service-worker things.
They become interesting when they keep a real app in control under slightly crazy conditions.
And if that sounds less glamorous than the average PWA demo, oh well.
Reality usually is. That is why it is worth talking about.