Taking the web offline – Service Worker (Death of the dinosaur)

If you are not interested in the full story, watch the result here.

If I look at my work for parleys.com it seems like the circle is finally closing. When I started working for parleys.com back in 2007, one of my first tasks was to build a desktop application for downloading presentations, so that our users were able to watch presentations on a plane, a train or wherever connectivity was an issue. Even though this application was built from the same code base as the website (Flex/AIR), they were still two separate applications. It required us to provide downloads and the user to perform an installation, and we had to take care of updates and platform compatibility issues.

Still, at that time it was a really nice use case and felt like „Yeah, that's it!“.

8 years later (8 years!!!) things have changed. Who is still downloading applications from the internet? Yeah, it still happens, but it feels somehow clumsy. The good news: today, in 2015, we have all the tools to make this kind of „desktop“ app obsolete.

Let's briefly look at what we need to make a web application run offline and what options we have today.

I would divide the requirements into two groups. First, we need a way to make our application shell – the code we need to actually run our application – available offline. I mean things like static assets: your index.html, JS files, images… Second, we need the possibility to store the content data the user needs inside the application (in our case video, slides and course assets).

Let's first check the options for storing the data:

Local Storage

Local storage is really only meant as a cookie replacement. It is easy to use, but the options are limited, and the biggest problem, which ruled it out for our needs, is the maximum storage capacity of only around 3-10 MB. Not enough to store videos which, combined, might be several gigabytes.
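
Just to illustrate the limitation, here is a minimal sketch (the keys and values are made up): localStorage only takes strings, and a write simply throws once the small per-origin quota is exceeded.

// localStorage only stores strings, and setItem() throws once the
// small per-origin quota (a few megabytes) is exceeded.
function trySave(key, value) {
  try {
    localStorage.setItem(key, value);
    return true;
  } catch (e) {
    console.warn('localStorage quota exceeded for', key, e);
    return false;
  }
}

trySave('user-settings', JSON.stringify({ quality: 'HD' })); // fine
// trySave('video-0', hugeBase64String);                     // will fail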

IndexedDB

IndexedDB fits our needs quite well as it allows you to store large amounts of data (using quota management and user permissions) and can also store blobs. It is also quite well supported among the evergreen browsers. But while in theory you can store large data, in practice there are known issues when you want to store large blobs in your DB. Basically, it's not really possible to append data to an existing blob without reading the existing chunk into memory. So your only option is to keep the whole file in memory and save it as one piece – something you do not want to do with files several hundred megabytes or even gigabytes in size.
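
For illustration, a minimal sketch of how storing a downloaded file as a blob in IndexedDB could look (the database name and URL are made up). Note that there is no way to append to the stored blob later – you would have to read it back into memory, concatenate and put() the whole thing again.

// Open (or create) a database with a simple key/value store for videos
var open = indexedDB.open('parleys-offline', 1);

open.onupgradeneeded = function () {
  open.result.createObjectStore('videos');
};

open.onsuccess = function () {
  var db = open.result;

  fetch('/assets/intro.mp4')                      // hypothetical URL
    .then(function (res) { return res.blob(); })  // the whole file ends up in memory
    .then(function (blob) {
      var tx = db.transaction('videos', 'readwrite');
      tx.objectStore('videos').put(blob, 'intro.mp4');
      tx.oncomplete = function () { console.log('stored'); };
    });
};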

Filesystem/File API

So here comes the Filesystem API. The Filesystem API is not a W3C standard as of now but just a proposal from Google. I have not followed it closely, but I have the feeling that it might never become a standard: https://hacks.mozilla.org/2012/07/why-no-filesystem-api-in-firefox/comment-page-1/

That's really unfortunate because it basically solves all our problems with the other APIs. As the name implies, you have read, write and create access to a sandboxed part of the local filesystem, which makes it a perfect fit if we want to store files. Used together with the FileReader API you get random access to files, so you can easily manipulate them and append data efficiently. So even with it not being a standard yet, I have decided to go with this option for now. Yes, it is Chrome-only right now, and yes, it is not a standard, but you can see it as a progressive enhancement technique: users on Chrome will benefit from the extra functionality and the others can still enjoy the web of yesterday. Also, the way I implemented it makes it easy to swap the Filesystem API for IndexedDB later (once it deals better with large files), so eventually it will work in more and more browsers.
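
A rough sketch of how appending works with Chrome's prefixed Filesystem API – the quota size and the appendChunk name are my own, and error handling is omitted:

function appendChunk(fileName, chunkBlob, done) {
  // Ask for a sandboxed persistent filesystem (Chrome-only, webkit-prefixed)
  navigator.webkitPersistentStorage.requestQuota(1024 * 1024 * 1024, function (granted) {
    window.webkitRequestFileSystem(window.PERSISTENT, granted, function (fs) {
      fs.root.getFile(fileName, { create: true }, function (fileEntry) {
        fileEntry.createWriter(function (writer) {
          writer.onwriteend = done;
          writer.seek(writer.length);   // jump to the current end of the file...
          writer.write(chunkBlob);      // ...and append the new chunk
        });
      });
    });
  });
}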

Now let's examine the options we have for storing our application shell.

Application Cache

On paper, Application Cache gives you exactly what we need, and it has been around for quite a while. You can define assets in a manifest and those will be cached by the browser, but there are many challenges with App Cache: it is difficult to manage which assets to cache and when the cache should update. Check out this article for more info.
We actually had a version of parleys for offline use in beta stage but never released it. I just never felt really confident with the App Cache solution we had. You can check out the Devoxx 2012 keynote where we demoed this app if you're interested in history.
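
For completeness, this is roughly what such a manifest looks like (file names are placeholders); it is referenced from the page via the manifest attribute on the html element:

CACHE MANIFEST
# v1 - bump this comment to force the browser to re-download everything

CACHE:
index.html
app.js
styles.css

NETWORK:
*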

Service Worker + Cache API

What sparked my interest in this web app/offline topic was the introduction of the Service Worker API last year. Service Worker gives you really fine control over network requests: you are able to intercept and transform requests, all with a pretty simple API. Together with the Cache API you have full, fine-grained control over which assets you want to store or what alternative content you want to serve when, for example, the network is down. There are many approaches to using Service Workers, and Jake Archibald (the man behind the API himself) has a nice collection of usage patterns. The way I am using the SW API for parleys.com really does not do it justice, as I am only using it to cache the app shell, but you can go further: improve caching for all kinds of assets (content assets included), provide fallbacks, and improve the overall responsiveness of your application.
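
As an example, here is roughly the app-shell pattern I mean – a minimal sketch with placeholder file and cache names, not the actual parleys.com worker:

// sw.js
var CACHE_NAME = 'app-shell-v1';
var SHELL_ASSETS = ['/', '/index.html', '/app.js', '/styles.css'];

// On install, put the application shell into the cache
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.addAll(SHELL_ASSETS);
    })
  );
});

// On fetch, answer from the cache first and fall back to the network
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});

// in the page: register the worker if the browser supports it
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}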

Usage on Parleys.com

For parleys.com I have used the Filesystem API to build a full-fledged download manager. Files are downloaded in chunks, and when a download is interrupted you can always pick it up again once you have connectivity, without restarting the download from scratch.
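
In spirit (not the actual implementation), resuming boils down to asking the server for the missing byte range and appending it to what is already on disk. Here is a sketch reusing the hypothetical appendChunk() helper from above; it assumes the server supports Range requests:

var CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per request

function downloadFrom(url, fileName, offset, totalSize) {
  if (offset >= totalSize) { return Promise.resolve(); }
  var end = Math.min(offset + CHUNK_SIZE, totalSize) - 1;
  return fetch(url, { headers: { Range: 'bytes=' + offset + '-' + end } })
    .then(function (res) { return res.blob(); })
    .then(function (blob) {
      return new Promise(function (resolve) {
        appendChunk(fileName, blob, resolve);    // persist the chunk on disk
      });
    })
    .then(function () {
      return downloadFrom(url, fileName, end + 1, totalSize); // next chunk
    });
}

// after an interruption, restart with offset = size of the partial file on disk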

The Service Worker API is used to serve the application shell, so once the app is cached you can shut down wifi, go to parleys.com and it still works – it is really a groundbreaking change in my opinion. Well, so groundbreaking that it is also kind of an issue: who knows about this? Users are not used to this functionality, and someone sitting on a plane without connectivity might not even try to load a web page, as they do not expect it to work. This is really something new where new patterns need to be established. I am not yet sure how this can be approached, but in my opinion some thought needs to be put into it. Any suggestions?

After this long introduction, let's have a look at the result:

Issues

Because we were a bit afraid that user behavior and expectations (or the lack of expectations) might actually be a problem, and because of the Chrome-only „issue“, we also investigated other solutions and built a version which can be downloaded as a desktop client. It uses the awesome Electron project which, like node-webkit, is based on Chromium.

The nice thing is that with the download manager in place, without changing one line of code (yeah, ok, it was 1 or 2 lines), and with the help of this great build packager, we have alternative desktop apps (Mac/Win).

Still, I am happy that we will not use them and will go with the web version instead. I think this new technology can only be pushed forward if we use and provide it. Then in 2 years offline web apps will be the norm, maybe like responsive design is today.

Related links:

Parleys Desktop Prototype 2013: https://www.youtube.com/watch?v=RtypJAykq74 (Node-Webkit based)


2 thoughts on “Taking the web offline – Service Worker (Death of the dinosaur)”

  1. Very nice, glad to see the functionality back!

    Is there a chance to put the download link on the talk page itself? This would allow quickly downloading all those “watch later” videos in my list. :-)

    Thumbs up, thank you!

  2. Yeah, definitely – it's only a first test to see if people like it, and it will be improved, so your feedback is really welcome!
