Responsive Design and the Modern Web Application

I recently gave a talk on some of the challenges a modern web application faces, and how we (as developers) sometimes make too many assumptions based on screen size.

It also covers some of the shortcomings of traditional RWD, and how my own pet project, RWD2, can help address some of them.

It’s from a local Meetup where I work, so the video is in Danish I’m afraid.


Responsive Web Design 2.0

Update: this project is now up on GitHub.

Here at Vertica A/S we’re kicking off the annual Innovation Camp for 2014. My entry was a proposal for RWD2.0, which tries to improve on RWD as we know it today. This is my pitch, and I’ll keep you posted on how it evolves.

What’s wrong with traditional RWD?

We use CSS and Media Queries to scale, move and show/hide elements, and visually it gets the job done. But all the elements are still active, taking their toll on bandwidth and performance whether or not they’re actually needed. This flawed (now you see me, now you don’t) perception of reality mimics that of an ostrich trying to hide by burying its head in the sand.

What do we want to achieve?

We want to be able to build truly responsive user interfaces without letting content and functionality targeting one screen size affect the performance of another – no matter how rich it is.

What do we need to do?
We need to lay down the basis for creating intelligent, self-contained components. Each component will be aware of all its own prerequisites, which will most likely be some mix of styling, templating, data and behaviour – i.e. CSS, HTML, JSON and JS.

In addition to the traditional RWD scaling and moving of these components, they can be bound to specific breakpoints, defining which screen sizes they are needed on and which they’re not. This is the key part of RWD2.0: components will not linger and affect performance unless they are actually part of that user interface.

Client side logic
The client-side logic is pretty straightforward and heads in the direction of proper Web Components, but without the need for encapsulation and cross-domain sharing.

Basically for each component we need to define:

  • A JSON source (data)
  • Client side template and styling (HTML & CSS)
  • Functionality layer if needed (JS)
  • Configurable breakpoints (to match other RWD concepts in place)

AngularJS will give us almost all we need to contain these on a per element basis in a structured way.
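To make the breakpoint binding concrete, here’s a minimal sketch in plain JavaScript – the component names and breakpoint values are hypothetical – of the decision each component would make before loading any of its prerequisites:

```javascript
// Each component declares the viewport range it is needed in, and a pure
// helper decides whether its CSS/HTML/JSON/JS should be loaded at all.
// Names and breakpoint values below are hypothetical.
function isComponentActive(component, viewportWidth) {
  var min = component.minWidth || 0;
  var max = component.maxWidth || Infinity;
  return viewportWidth >= min && viewportWidth < max;
}

// A rich carousel only makes sense on larger screens; a compact list
// replaces it on small ones.
var carousel = { name: 'carousel', minWidth: 768 };
var compactList = { name: 'compact-list', maxWidth: 768 };

isComponentActive(carousel, 1024);   // true  -> fetch its prerequisites
isComponentActive(carousel, 320);    // false -> never requested at all
isComponentActive(compactList, 320); // true
```

In the browser the same check would be re-run on resize (or via matchMedia listeners), activating and deactivating components as breakpoints are crossed.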

Server side logic
For common data sources like catalogs, we are close to achieving this today. On most solutions we are already serving this as JSON from just a few service endpoints.

I guess the hardest part will be turning Umbraco into a JSON source – serving both content and rich elements in the bits and pieces that are needed, while maintaining a proper flow for content editors. If we focus backend work on honoring this, perhaps we could also get the benefit of turning the CMS part of our solutions into a single-page application – as we already do on most catalog browsing – sounds like a win-win.

But I would appreciate input on this from the backend dudes, so we can get closer to what needs to be done server-side.

Other considerations

RWD2.0 could result in many requests per interface rendering, and although these are non-blocking, the overhead could defeat the whole purpose. There will be extra credit for figuring out a way to pool all pending requests for a given interface into a single one, mapped to a single server endpoint.
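As a sketch of what that pooling could look like – the endpoint and component ids below are hypothetical – pending requests can be collected and flushed as one batch:

```javascript
// Components register the data they need; a single flush sends one batched
// request instead of many. In a real implementation the flush would be
// scheduled automatically at the end of the current tick, so components
// initializing in the same render get pooled together.
function createRequestPool(flushFn) {
  var pending = [];
  return {
    request: function (componentId) {
      pending.push(componentId);
    },
    flush: function () {
      if (pending.length) {
        flushFn(pending.splice(0)); // one call carrying every pending id
      }
    }
  };
}

// Usage: three components ask for data, but only one batch goes out.
var batches = [];
var pool = createRequestPool(function (ids) {
  batches.push(ids); // real code would POST ids to e.g. /api/components/batch
});

pool.request('header');
pool.request('carousel');
pool.request('footer');
pool.flush(); // batches is now [['header', 'carousel', 'footer']]
```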

Responsive web design 2.0 diagram

Raising The Bar With AngularJS

I recently gave a talk on some of the techniques used in development – how AngularJS helped us implement new ideas and removed a lot of complexity.

It’s from a local Meetup where I work, so the video is in Danish I’m afraid.


Web Components

This is still early stage, but if you have an hour or two, this shows you where web-app development is heading.

A walkthrough of the elements – and slides.

The Google devs’ (alpha) take on the concept ships with polyfills for almost all of the features. Just looking at the samples gives you a good idea of the benefits.

Actually, Polymer.js lets you use the concepts today in ‘Evergreen’ browsers – but native support is when the benefits start kicking in.

My advice would be to stay on the AngularJS path, it is in line with where we’re going!

Let the trolling commence :)

Update: Some more Web Components stuff from this year’s Google I/O has just been released.

More from Eric Bidelman on the concept.

And from the guys on the Polymer team.

JavaScript Best Practices Podcast

A follow-up on the presentation I did for ANUG a few months ago. I’m being interviewed by Søren Spelling, and we have a friendly talk about this and that in Danish.

ANUGCast #157 JavaScript Best Practices part 1

ANUGCast #158 JavaScript Best Practices part 2

You can also find them in their iTunes feed:

JavaScript Management & Best Practices

The other day I did a talk on structuring client code and getting more out of jQuery. The target audience was ANUG – getting these .NET’ers up to speed on the world of JS.

It’s a good read for anyone into frontend development, with loads of code samples and clever tricks ready to use in your next project. There are also a few suggestions on how to get VS2010 up to speed when it comes to client-side development.

You’ll find the HTML5 slides from the presentation here.

Javascript Management
(Chrome, Safari and Firefox only)

Deep Linking and Indexing AJAX Applications – Google, Hashbang and state maintenance

In AJAX applications, user interaction is handled on the fly and content is generated and injected into the DOM. Today this is an important step in creating responsive UIs, and the benefits are obvious.

But since users are no longer browsing actual pages, you need to take extra steps to maintain state, handle URLs and serve indexable content to the crawlers. This post should give you a head start on your next fully-fledged AJAX application.

State and bookmarkable URLs

Basically we wanna accomplish two things here:

  1. Be able to change the browser’s current URL without causing roundtrips to the server.
  2. Route those URLs to JavaScript functionality and dynamic content.


In modern browsers history.pushState() gives you full control, allowing you to manage history and change URLs strictly on the client – to any valid URL within the current domain. The onpopstate event will then let you listen for URL changes and map relevant content and functionality.

Usually, though, you need to support a few more browsers and are stuck with location.hash and the onhashchange event, which have wider support. The concept here is to use the document’s hash fragment (which does not cause roundtrips) to emulate URL structures and/or parameters.

This could look something like this: http://example.com/#foo=bar&what=ever

Or this perhaps prettier one: http://example.com/#/foo/bar/or/what/ever

As long as it’s a valid URL it can take whatever form you fancy, and if you include a plugin like Ben Alman’s jQuery hashchange, this approach will have you going in IE6 and IE7 too.

From here on out, it’s about updating location.hash while listening for changes with onhashchange, then executing functionality accordingly. In other words, you’re now linking to specific parts of the application and allowing users to bookmark relevant URLs.
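A minimal vanilla sketch of that pattern – the route paths and views are hypothetical – where the hash fragment maps to a handler and the browser wiring is a single listener:

```javascript
// Map hash fragments to handler functions; unknown hashes fall back to a
// default route.
function resolveRoute(routes, hash) {
  return routes[hash] || routes['#/'];
}

var routes = {
  '#/':       function () { return 'home view'; },
  '#/search': function () { return 'search view'; },
  '#/help':   function () { return 'help view'; }
};

resolveRoute(routes, '#/help')();    // 'help view'
resolveRoute(routes, '#/missing')(); // falls back to 'home view'

// In the browser, wiring it up is one listener (no server roundtrips):
// window.onhashchange = function () {
//   resolveRoute(routes, window.location.hash)();
// };
```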

Abstracting URL handling

While it’s very possible to do manually, this URL-to-functionality mapping can get quite tedious. Fortunately there are quite a few libraries to help you abstract this part. Ben Alman’s extended BBQ plugin takes hashchange a step further, adding jQuery.param and jQuery.deparam methods to help with querying URLs.

A few other examples: Basic, Advanced.

Also, more elaborate frameworks like Backbone.js and the lighter Spine.js have routing modules for mapping functionality. These also give you the advantage of supporting history.pushState() while falling back on location.hash in older browsers – sounds like a win-win.

Here’s an example of routes in Backbone.js

var Workspace = Backbone.Router.extend({
  routes: {
    "help":                 "help",    // #help
    "search/:query":        "search",  // #search/kiwis
    "search/:query/p:page": "search"   // #search/kiwis/p7
  },

  help: function() {
    // render the help view
  },

  search: function(query, page) {
    // run a search for query, optionally paged
  }
});


Indexing content with Google’s Hashbang

Now that you’re about to turn your AJAX application up to 11, you need some way to dish out content to search engines to complete the scenario. If you’re able to go with the modern approach of changing proper URLs with pushState(), you just have to make sure that the server is able to render relevant content based on those same URLs – which might include some user-agent detection skills.

When it comes to the hash fragment, things get a bit more complicated, as it’s not part of the communication with the server. Google is aware of that and offers a solution with the hashbang notation – ‘#!’. With this you’re letting Google know that this ‘AJAX URL’ is indexable and that the server is able to render a snapshot of the HTML. The crawler will then make one additional request, using the hash fragment as a parameter.

Using one of the previous examples, a request for:

http://example.com/#!/foo/bar/or/what/ever

will result in this additional request:

http://example.com/?_escaped_fragment_=/foo/bar/or/what/ever

The server then has to process this request and render a snapshot of the relevant content.
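The mapping between the two URL forms can be sketched like this (simple paths only – the real scheme also percent-encodes special characters in the fragment):

```javascript
// The crawler turns '#!' into an '_escaped_fragment_' query parameter;
// the server reverses it to find the AJAX route it should snapshot.
function toCrawlerUrl(url) {
  return url.replace('#!', '?_escaped_fragment_=');
}

function fromEscapedFragment(url) {
  var marker = '?_escaped_fragment_=';
  var index = url.indexOf(marker);
  return index === -1 ? null : url.slice(index + marker.length);
}

toCrawlerUrl('http://example.com/#!/foo/bar');
// -> 'http://example.com/?_escaped_fragment_=/foo/bar'
fromEscapedFragment('http://example.com/?_escaped_fragment_=/foo/bar');
// -> '/foo/bar'
```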

Basically that’s it – you now have hashbang URLs in the Google index, linking directly to your AJAX functionality.

Using redirects

You can use redirects to help with the server part; it can sometimes make things easier. As long as the crawler eventually ends up at a page, it’s perfectly safe to do redirects.

Let’s say the crawler requests: http://example.com/?_escaped_fragment_=/foo/bar

You could redirect to the relevant content at: http://example.com/foo/bar

Also, if you’re using a framework like Backbone.js with pushState() for modern browsers and location.hash as a fallback for older ones, redirects will complete the cycle.

The modern browser gets a pushState() for AJAX functionality on: http://example.com/foo/bar

The older browser gets a hashchange for AJAX functionality on: http://example.com/#!/foo/bar

The server redirects the crawler’s additional request from: http://example.com/?_escaped_fragment_=/foo/bar

To this: http://example.com/foo/bar

So, 301s or 302s?

Using a 301 ‘Moved Permanently’ redirect, the target URL will end up in the Google index; if you use a 302 ‘Moved Temporarily’, it will be the #! URL.

Usually, 302s straight to the AJAX experience are the way to go, but remember that users with disabilities, or users with JavaScript turned off, could hit that URL too.

Here’s the official info on Google’s AJAX crawling; there’s some good info on creating HTML snapshots as well.

Frontend Development Feeds and Newsletters

Hey folks, here’s a list of feeds and newsletters you might find useful – I know I do. They’re mostly personal blogs by dedicated JavaScript developers, and the newsletters have kept an excellent standard so far.
So, in no particular order:



What’s the Business Value of a Rounded Corner?

This is a dive into the challenges of visual identity, browser compatibility and embracing the web. It’s up on the blog where I work, so sorry guys, this one is in Danish.

Website Performance

This is a presentation on website performance I did for a tech talk at work. Just to state some best practices and get everyone to think about it in our projects.

It’s barely scratching the surface of a big topic, and if you wanna know more I suggest you read stuff from guys like Stoyan and Souders.