
Almost all advice about web application optimization is catastrophically wrong



Most of the advice you run into regarding web application optimization is not only wrong, it’s what I call catastrophically wrong, meaning that it:

  • leads you to waste time on optimizations that have negligible impact on actual performance
  • leads to making some decisions that actually worsen performance
  • completely disregards the performance optimizations that actually do matter

That’s a big claim, and I will back it up.

So here it is:

Performance, for a web application, means efficient use of scarce resources over the lifetime of a typical user session.

It’s that second half – “over the lifetime of a typical user session” – that explains why most of the advice out there is misguided.

Web Sites vs Web Applications

To see why the standard advice is misguided, we need to think about what the term web application means as compared to web site.  We can consider a continuum from web sites to web applications, which would look like this:


Web Site

Example: blog

  • Most people visit once, never return.
  • Typical session length: single request.
  • No interactivity beyond pop-up menus, cookie confirmations, trivial Contact Us / Register forms.
Hybrid Site/Application

Example: online banking

  • Most people visit 2-3 times/month.
  • Typical session length: 5-20 minutes.
  • Has some interactive areas (view / filter recent transactions, loan applications).
  • Interactive areas are embedded within significant static content (credit card offers, loan information, etc).
Web Application

Example: insurance claims processing

  • Most users use it all day, may never reload (sessions are renewed without a reload, “relogin”).
  • Typical session length: 1 hour – 1 month.
  • Complex interactive views: large data sets, search/sort/group/pivot, inline grid editing, complex forms with interdependent field validation rules.

Not everything fits neatly onto this continuum of course, but it’s a useful analysis tool.  When you consider the right end of the spectrum, you may already be realizing what is wrong about the most common advice.

Let’s take the example of a category of web application we’ve built, multiple times, for some of the world’s biggest insurance carriers: insurance claims processing.  This is an app that users continuously interact with, all day every day, with multiple tabs open processing different claims concurrently, lots of complex data validation rules, and lots of different types of data streams being integrated.

This application basically never reloads.  I mentioned session lengths up to “1 month” above, but there’s really no upper limit – a number of times, we’ve seen pages older than a month.  The only real limit to the page lifetime is upgrades; when a new version of the app is rolled out, users finally do need to reload.  More often, the page lifetime is limited by operating system reboots!

In this scenario, you may be wondering about session expiration – doesn’t that require a page reload?  No, it does not: when a user goes to lunch or to a meeting, they come back to a dialog telling them their session has expired and asking them to log in again.  The “relogin” happens in place, and the page does not reload.

When an application basically never reloads, the approach to web application optimization is necessarily completely different.  In particular, all of the standard concerns are either gone or recast in a different light:

  • concerns about “lightweight components” that deliver basic functionality in the minimum number of bytes: irrelevant
  • concerns about carefully trimming unused functionality: just give me the kitchen sink and don’t bother me again!
  • waterfall analysis of initial load: needless beyond determining that everything cacheable is cached, for next month’s reload
  • media loading and debates about image codecs, SVG vs webfonts and all that: broad format support still matters, but trimming media bytes will have no measurable impact on performance

The usual concerns are out the window!  But is this kind of application actually common?

Examples of real Web Applications (it’s not an edge case)

You may now be thinking, OK, you’ve got a rare, edge-case application where web application optimization is very different, but it’s just a single edge case.

Not so.  While applications like these are indeed 1000x less common than ordinary web sites, they are not rare at all.  Some others:

  • drug discovery dashboards: take a drug candidate, look at all the in vivo / in vitro assays done on it (hepatotoxicity, pharmacokinetics, etc), compare with in silico / deep learning models, figure out the next drug candidate to synthesize, and repeat
  • securities analysis: combine past performance, fundamentals, predictions from many sources and your own proprietary models, along with risk models and your firm’s particular exposure, then act: buy/sell/trade/hedge
  • logistics / fleet management / warehousing: track all of your vehicles (land, water, air), their contents, and how they compare to projections of demand, then model possible changes and execute them
  • chip manufacturing: rapidly explore huge datasets to find the sweet spot of performance vs RF interference vs printed circuit size vs reliability and other factors.  It’s a complex process, especially for communications chips and other mixed analog/digital devices
  • real-time trading: “Christmas tree” interfaces continuously receiving market data and doing real-time comparisons with predictive models to guide the next trades
  • defense: too many to list.  This category overlaps with logistics of course, but also, deep bidding systems for prime contractors, management of drone / satellite media, many other things
  • network management / virtualization: live views of networks and their status, multiple performance metrics, ability to set complex alerts and automate corrective actions

I could go on at some length, but, you get the point: these web applications are surely 1000x less common than websites, but they are also critically important – this is the software that keeps the complex infrastructure of the modern world humming along.  

Given the importance of these applications, we need a practice of web application optimization that works well for such apps.  And this is what my company, Isomorphic, has focused on for 20 years.

Web Application Optimization: what are we optimizing?


Returning to our definition of performance:

Performance, for a web application, means efficient use of scarce resources over the lifetime of a typical user session.

What does web application optimization mean when the session length is as much as one month?  The focus completely shifts: now, we are trying to minimize use of resources after the page has loaded.

But what specific types of requests need to be avoided?  We can break them down by type, in increasing order of cost:

  • Type 1 or “static” requests: requests for static, typically cacheable resources

It doesn’t matter if you load something like an icon after page load.  It’s cacheable, and unless you’ve botched your caching setup, the resource will never be loaded again (even on the full page reload coming up next month).  

Likewise for things like a user’s profile picture, or a definition from Wikipedia – it’s well understood how to scale such requests basically infinitely.

This is not a “scarce resource”.  When it comes to web application optimization, such requests can be basically ignored.
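To make that “never loaded again” behavior concrete, here is a minimal sketch of the caching policy this implies.  The names and the fingerprinting convention are hypothetical, not tied to any particular server framework: assets with content-hashed filenames can be cached essentially forever, because a new build produces a new URL.

```typescript
// Sketch: cache headers for "Type 1" static requests, assuming
// fingerprinted filenames (e.g. app.3f9c1a.js) so the content at a
// given URL never changes.
const ONE_YEAR_SECONDS = 365 * 24 * 60 * 60;

function cacheControlFor(path: string): string {
  // Fingerprinted assets: safe to cache "forever" -- a new build gets a new URL.
  if (/\.[0-9a-f]{6,}\.(js|css|png|svg|woff2)$/.test(path)) {
    return `public, max-age=${ONE_YEAR_SECONDS}, immutable`;
  }
  // Everything else (e.g. the HTML shell): revalidate with a cheap
  // conditional request instead of caching blindly.
  return "no-cache";
}
```

With a policy like this in place, even next month’s full page reload mostly hits the browser cache.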

  • Type 2 or “app tier” requests: requests that require authentication, touch session data, or in some other way force the application tier to do work

Scaling authentication systems is well-understood, but not as trivial as scaling requests for unauthenticated static content.

Further, any request that touches session data causes an access to a data store (db or memory) that is synced across all servers in the application tier.  So here, we’re starting to see requests that matter.

However, the defining trait of a “Type 2” request is that it really only touches data relevant to one user (for example, session data, or data about preferences).  Such requests can be horizontally scaled: just add more app tier servers, more replicated or segmented databases, use geo-dns, etc.

  • Type 3 or “shared DB” requests: access a central, read/write store (basically a database)

This is where to focus.  When approaching web application optimization, if you haven’t completely bungled the architecture, and you’ve already solved the easy stuff, the final bottleneck is always, always some kind of shared read/write data store.

To be very clear: we are talking about shared storage that is frequently accessed and frequently written to, and also where it’s important everyone is seeing the latest data.  Data stores that are mostly read-only are easy enough to scale, as are data stores that can be written to, but where it’s not very important that people are seeing the very latest (think: social networks).  In all of the types of applications I’ve mentioned above, we are looking at frequent read/write access, and synchronization is important (a securities trader cannot be looking at stale data!).

You can now see how different this is.  Classic optimization techniques pretty much focus on Type 1 requests.  In web application optimization, those are all but ignored – they don’t matter!

The Lost Art of Web Application Optimization


If you go look up articles on web application optimization, the techniques needed to reduce and eliminate the important requests are almost never talked about.  This is terrible, because, there are a lot of well-meaning developers out there who believe they’ve done all the right things to optimize their web applications, and in fact, they haven’t even started on the approaches that would actually make a difference.

I cannot tell you how many times we have seen a team that went through a checklist of optimizations, asserted that they had done everything they possibly could and still had a slow app, and within hours my team had found a simple optimization that would make a 2x, 5x, even 100x difference.

Please don’t misunderstand: it’s not that these developers are dumb.  The problem is that there is so much momentum around approaching web application optimization in a specific (mostly useless) way.  These developers were dutifully following best practices.  If they saw a “data request”, the reaction was “well, can’t do anything about that – that’s for the DBA to think about”.  Yet in fact, the best way to optimize a “data request” is to never send it at all.

So.  Let’s get into the details of what the right web application optimizations look like.  If you are a developer, I promise you that this will be an eye-opener for you, and may just make you the hero who identifies the key optimization that saves your project.

Where and how do these techniques apply?

To again frame what we’re talking about – web application optimization, not web site optimization – let’s return to our continuum, and understand which specific projects we’re talking about here.  On this version of the continuum, I’ve shown where Isomorphic’s technology (SmartClient) fits, that is, the specific types of applications we address:


Web Site

Example: blog

  • Most people visit once, never return.
  • Typical session length: single request.
  • No interactivity beyond pop-up menus, cookie confirmations, trivial Contact Us / Register forms.
Hybrid Site/Application

Example: online banking

  • Most people visit 2-3 times/month.
  • Typical session length: 5-20 minutes.
  • Has some interactive areas (view / filter recent transactions, loan applications).
  • Interactive areas are embedded within significant static content (credit card offers, loan information, etc).
Web Application

Example: insurance claims processing

  • Most users use it all day, may never reload (sessions are renewed without a reload, “relogin”).
  • Typical session length: 1 hour – 1 month.
  • Complex interactive views: large data sets, search/sort/group/pivot, inline grid editing, complex forms with interdependent field validation rules.

I’m going to use SmartClient examples to explain the general principles of web application optimization, but to be clear: you don’t have to use SmartClient to achieve these optimizations.  Certainly, it’s a lot easier that way, but, if you have existing applications written in some other technology, or you are in a totally different context (maybe not even the web?) all of these principles can be applied with any technology.

Also, not only do these techniques apply to any UI accessing a remote data service, some of these techniques apply to “headless” (no UI) situations.  We have had a couple of funky projects where we have needed to take our JavaScript web components and run them inside of a server (e.g. a Node environment).  Our intelligent caching and server offloading technology worked just as well in this situation as it does when it runs inside a browser, and we achieved a dramatic reduction in expensive server-to-server requests.

The Counter-Intuitive Art of Optimization

Optimization is part art, part science, part voodoo.  Nothing does a better job of separating merely average talent from superb talent.

If you want to reach the point of being “superb” talent at optimization, you need to keep 3 things in mind:

1. Measure twice, cut once

This is a saying from carpentry & construction, meaning: double check your measurements before you cut your length of wood / steel / pipe, or you’re probably going to have to redo parts of your work later.

In optimization, it means: never attempt an optimization until you have confirmed what is actually the slow part.

Many times, I have seen teams conclude that a particular part of the system “must be” the slow part, and then go off and spend days optimizing that piece, and the result was: zero.  No impact on performance at all.

How did that happen?  Because when they decided on what “must be” slow, they relied on conjecture and anecdotes.  They never measured.

There is a social aspect to this problem too: developers get very excited about optimization.  Also, many developers are very sure of themselves.  So, if you are the voice in the room saying “hang on, are we sure this is the problem?”, well, that’s a very unpopular position.  When advocating for measurement, I have been shouted down more than once.  In each case where I was shouted down however, I was proven right.

Part of the art of optimization is navigating this social situation.  One great trick is to say “OK, let’s start on the optimization, but can we also have a different person work on some measurements?”.  Usually, the measurement work is completed before the optimization cowboys have done any real damage, and a course correction is applied.

2. Reduced coding effort is an optimization 

I regard reduced coding effort as a web application optimization in its own right.  The reason is simple: if it takes less time to build a given screen or function, that leaves more time to consider performance and other aspects (such as deeper testing).

For this reason, in the points below, reduced coding effort is repeatedly mentioned, and should be taken as another form of web application optimization – because it is.

3. Every eliminated request counts, and the last few you eliminate count even more

As we discuss approaches to eliminate requests, each of these optimizations, taken in isolation, may not seem like it would make a huge difference.  But when you apply all of these optimizations, and do so correctly, it is literally routine to see performance improvements of 30x or more.

There’s a slightly counter-intuitive reason for this.  Say you implement some of these techniques and you eliminate 80% of the most expensive requests.  That’s great, the app is now 5x faster!  But let’s say you find a way to eliminate another 10% of the requests.  You’ve now eliminated 90% of the original app’s requests, so, the app speed just doubled; you now have a 10x net improvement over the original app.  Eliminate another 5%?  Doubled again, now a 20x improvement over the original.

The number of requests you are eliminating drops rapidly, so it may seem like productivity is going down.  In reality, performance is going exponential.
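The arithmetic behind this is worth making explicit: if the bottleneck is the expensive requests, and you eliminate a fraction f of them, the remaining load is (1 − f), so the effective speedup is 1 / (1 − f).  A one-line sketch:

```typescript
// Speedup from eliminating a fraction of the bottleneck requests:
// remaining load is (1 - f), so effective speedup is 1 / (1 - f).
function speedupFromElimination(fractionEliminated: number): number {
  return 1 / (1 - fractionEliminated);
}

speedupFromElimination(0.8);  // ~5x  -- eliminate 80%
speedupFromElimination(0.9);  // ~10x -- another 10% doubles it
speedupFromElimination(0.95); // ~20x -- another 5% doubles it again
```

Each additional slice of eliminated requests is smaller in absolute terms, but its effect on the remaining load is larger.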

This is why it’s routine for us to see 30x plus improvements.  And this is why it’s so important to look closely at every request and analyze whether it can be avoided: each category of request that you can get rid of potentially makes a dramatic difference in performance.

What you can expect to achieve: stories from the field

Because we have rolled out SmartClient-based replacements for existing applications scores of times, we’ve seen the effects on performance & productivity first hand, and sometimes this creates an entertaining story:

1. shortly after we upgraded an existing app with SmartClient components, a network admin filed and escalated a trouble ticket, because load on both the network and the database had dropped so dramatically that he assumed something must be wrong.  Nothing was wrong of course; the end users were actually getting work done far more quickly.

2. I was half-jokingly chastised by one of the end users of an application we’d replaced.  He had gotten used to going for coffee or chatting with colleagues during certain extremely slow operations in the old app.  Now he couldn’t do that, and it was our fault!

3. staff from another department came by asking why their non-SmartClient applications were running so much faster lately – had someone upgraded the DB?  This person was actually somewhat annoyed, because she had been told for months that it was too expensive to upgrade the DB, and she thought some other department had pushed through a DB upgrade, which felt very unfair given the number of times her team had complained.

The actual cause was that our SmartClient-based replacement application was no longer hammering the database that was shared with her department’s applications.  It was such a dramatic improvement that even the non-SmartClient applications were now running noticeably faster.

We went on to replace an application for her team, too.

4. our replacement app was so much faster that it obsoleted someone’s pet project, which they had been relying on for a promotion.  We gained a political enemy that day, and this is literally still a problem for us at a customer of ours.

I could go on at length, but, in short, when it comes to web application optimization, these are the techniques that actually matter.  These are battle-tested, extremely effective approaches for building web applications that are extremely efficient, and also, a joy to use (because they are so responsive!).

Web Application Optimization techniques


Now that we have defined performance, and defined the target of web application optimization, let’s get to the main point: how to do it.

We’ll break this up into a few categories of optimization, starting simple in each category and building on our knowledge to get to the truly sophisticated stuff.

Here we go!

DataGrids, Trees, ComboBoxes: intelligent caching and intelligent use of client-side operations

DataGrids are at the core of most enterprise applications, and they are the components that most obviously make the most expensive requests, so we’ll start here.

Note that, when discussing optimizations in this area, I mostly just refer to “grids”.  However, these optimizations are far broader than what is normally thought of as a “grid”.  Specifically, many components that are not called “grids” really are grids.  For example:

  • a tree is just another type of grid (this is especially obvious if it’s a multi-column tree)
  • a combobox, select item or other “drop-down” is just a grid attached to an input field (again, more obvious if the drop-down is multi-column)
  • a menu is just a grid.  This is especially obvious if it’s a data-driven menu, where some or all of the menu items come dynamically from stored data
  • a tiled view is another type of grid.  Especially clear if there’s a searching and sorting interface
  • a repeating form (multiple identical forms for editing similar records) is another grid

In general, any repeating UI that is attached to a set of similar records is – you guessed it – a grid.  For simplicity, in this article, I will usually just refer to “grids”, however, bear in mind, I mean all of the above cases, and any other cases of repeating UI attached to a list of data.  As far as web application optimization, the techniques and concerns are basically the same.

1. “Adaptive” Filter & Sort

Among the grids that are capable of data paging at all, most operate in one of two modes: either you load all the data up front and the component can do in-browser searches and sorts, or you use data paging and the server is expected to do all of the work.

Our SmartClient components implement something better, which we call Adaptive Filtering and Adaptive Sorting.  It means that, if the data set happens to be small enough to be completely loaded, our components automatically & transparently switch over to using local filtering and local sorting, then automatically & transparently switch back to server-based filtering and sorting as needed.

This is not easy to get right – consider a user typing into a dynamic search box: after typing 3 letters (“box”) there are too many matches, but after a 4th letter (“boxe”) now all matches are loaded.  If the user types a 5th and 6th letter (“boxed “), you do the filtering locally.  But if they backspace over the last three letters of their search string, you need to go back to the server.  You also need to go back to the server if they change some other criterion in such a way that the overall criteria is no longer more restrictive than it was at the moment you acquired a complete cache of matching records.

It’s a complicated feature, and we have dozens of automated tests ensuring that it is exactly right in all cases, but, wow, this feature is worth it.

There are two reasons this is so valuable:

  • while data paging is absolutely required for the reasons explained above, most of the time, end users are working with much smaller data sets.  I have seen just this feature alone eliminate 90% of the requests that were originating from a given screen; I don’t think I’ve ever seen a searchable grid where the impact was less than a 50% reduction in requests.
  • small numbers of matching records do not mean that the DB did less work.  That’s so important it needs repeating: small numbers of matching records do not mean that the DB did less work.  In order to return your 50 matching records, the DB might have consulted thousands, tens of thousands or millions of records.  It’s counterintuitive, but it is more often the smaller result sets that correlate with the DB doing lots of hard work.  For this reason, eliminating requests that further refine result sets – say reducing from 75 down to 10 – has an enormous impact on database load.

For example, if your end user does a query that returns 50 matching records out of several million, and then they hit sort – right there, that’s a 50% reduction in DB load if that sort is performed locally due to Adaptive Sort.

Similarly, if they typed in a search string, got 50 matching records, and then typed another couple of letters: that’s a tripling of application speed if those last two letters did not result in server requests.

Furthermore, in each of these cases, the end user didn’t have to wait.  The local filter or sort returned essentially instantaneously, which is a huge boost to productivity.

To be frank, even though I was the architect of this feature, I didn’t realize the impact that it would have.  I had to see the before & after Dynatrace output of DB load before I realized that Adaptive Filtering & Sort was preferentially eliminating the most expensive requests.

Arguably the best thing about this feature is that it’s just on by default in SmartClient.  You don’t have to do anything – it just works.  When we rolled it out, all of our customers’ apps were upgraded at once.

2. Use incremental rendering & data paging in grids, trees & drop-downs, always, and especially during development

Incremental rendering means rendering only part of the dataset – just rows that have been loaded, for example – and rendering the rest on demand (while scrolling, for example).
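The viewport math behind incremental rendering is simple; the following is a minimal sketch (fixed row height assumed, names hypothetical) of how a grid decides which slice of rows it actually needs to render or fetch for the current scroll position:

```typescript
// Sketch: which rows does the grid need for the current viewport?
// Fetch/render only this window, plus a small overdraw buffer so
// scrolling a few rows doesn't immediately trigger another fetch.
function visibleRange(scrollTop: number, viewportHeight: number,
                      rowHeight: number, totalRows: number,
                      overdraw = 10): { startRow: number; endRow: number } {
  const startRow = Math.max(0, Math.floor(scrollTop / rowHeight) - overdraw);
  const endRow = Math.min(totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overdraw);
  return { startRow, endRow }; // half-open range [startRow, endRow)
}
```

With a 600px viewport and 20px rows, a dataset of 100,000 records only ever needs around 40-50 rows rendered at a time.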

Most “lightweight” components tout the fact that they can render 2000 records worth of data very quickly – just write out a giant HTML table for all the data, all at once, and then you can scroll around freely, since it’s all rendered in advance.  Some even offer client-side filtering and sort, again, only if you load all data in advance.

This is worse than useless.

The fact that features vanish once you hit a certain data threshold means you are encouraged to try to load enough data to stay under that threshold – perhaps thousands of rows – even though the end user is likely only viewing the first 30 or so before they either find what they need or narrow the search.

You end up burying the DB to try to compensate for these components’ limitations.

What is particularly problematic here is that these problems are often discovered late in the development cycle: the developers have been working with small sample datasets the whole time, and no one has looked at what happens with real data volume.  This can become a catastrophe in multiple ways:

  • demos have been given showing features that aren’t actually available for larger datasets, and now that those features can’t be used, there are gaps in the product’s capabilities
  • UI logic has been written that inadvertently relied on all records being loaded.  When the system is switched over to use data paging, suddenly there are several bugs.  New data services need to be added to handle cases that were previously handled in-browser, also the way that selected records are tracked has to be reworked, and so on.
  • developers scramble to add server-side filtering, sorting and data paging features, which is non-trivial (hint: if you expect your ORM to “just handle” data paging, it won’t – at least not efficiently, and not for non-trivial queries)

I’ve seen this particular type of disaster happen so many times, that I can tell you what happens next: somebody suggests the idea of switching grids into “local mode” only when the data is small enough.  This almost works, only:

  • the end users are baffled by grid features that are sometimes there and sometimes not
  • client-side vs server-side filtering and sorting are similar, but not exactly the same, so when end users try to share instructions or saved searches, those don’t work for other users
  • there are weird edge cases like: if you remove some filter criteria, the grid needs to switch from “local mode” to “server mode”, so the grid needs to be destroyed and re-created, which loses state like scroll position, unsaved edits, and the list of selected records

In the end, the temptation is just too strong.  The developers say: if we just load all the rows every time, we don’t have to deal with all these issues, and we can ship now and somehow fix this in 2.0.

So the application ships that way, and the performance is awful, and usually never really fixed.

So, when considering UI components like grids, trees and drop-downs:

  • the UI components should treat data paging and incremental rendering as the default.  Having all data locally should be a special case, not the norm.  It’s a big red flag if a component treats “all data is local” as the default case
  • completely ignore any demos that use local data, and only consider those features to be “real” if you see a demo that shows them working with data paging
  • make sure your developers are working with large sample data sets and have data paging enabled in development.  If you don’t do this, there will be bugs found late in the development cycle

In sum: treat data paging as a baseline requirement, and select technology accordingly, or you will end up shipping something that is crippled or slow or both at once.

3. Automatic Cache Updates

It’s a very common experience: you select a record from a grid, edit it in a form, save it and return to the grid, and you see the grid reload, because you just changed a record.

It would be far more efficient to just update the grid’s cache in place, but, it’s not necessarily that easy: what if the record has changed such that it should no longer be visible in that grid, because it doesn’t match the criteria for that grid anymore?  What if its sort position changed?

Worse, missing this opportunity doesn’t just cost one request: because data elsewhere might now be stale, other components within the same application end up making additional unnecessary requests.  For example, after finishing editing in the grid, the end user might return to another screen where the same records were shown in a comboBox.  If those, too, are stale, that gets reported as a bug, and the typical developer fix is to force the comboBox to do a fresh fetch every time.

The result is dozens or even hundreds of unnecessary “cache refresh” requests over a typical session.

The solution is easy enough to state:

  • all components that can fetch data have a notion of the “type” of record they are dealing with – they are connected to a central “DataSource” for those records
  • when changes are made, the DataSource broadcasts information about the change
  • all components that have caches know how to update them in place – they know what criteria & sorting rules were used to load their data, and they can apply those criteria & sorting rules to the updated record to see if it should remain in cache, or shift position
  • critically, the criteria and sort applied on the client works exactly the same as if the same criteria and sort had been used in a server fetch.  Otherwise the cache update may fail, leaving records around that should be eliminated, or eliminating records that the user would still expect to see
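The cache-update step itself can be sketched as follows – a deliberately simplified version with hypothetical names, which assumes (as the last bullet stresses) that the supplied `matches` and `compare` functions behave exactly as the server’s criteria and sort would:

```typescript
// Sketch of an automatic cache update after a record changes.
interface Rec { id: number; [k: string]: unknown }

function applyUpdate(cache: Rec[], updated: Rec,
                     matches: (r: Rec) => boolean,
                     compare: (a: Rec, b: Rec) => number): Rec[] {
  // Drop any stale copy of the record from the cache.
  const next = cache.filter(r => r.id !== updated.id);
  // Re-insert only if it still matches the grid's criteria...
  if (matches(updated)) {
    next.push(updated);
    next.sort(compare); // ...at its (possibly new) sort position
  }
  return next;          // cache repaired in place: no server round trip
}
```

In a real system the DataSource broadcasts the change and every component with a cache runs this logic against its own criteria and sort.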

This is an approach that is both a direct optimization and indirectly optimizes the application by reducing coding effort, leaving more time for performance analysis.

Once you’ve got this system in place, you basically no longer have to think about possibilities for stale data – it’s just handled for you.

4. Advanced Criteria: a Cross-Cutting Concern

Once you have Adaptive Filtering and Automatic Cache Updates, you need your client-side filtering system to be very, very powerful and flexible.  Why?  Because, if the client system can’t closely match server filtering, you have to turn client filtering off, and only rely on server filtering, which means a lot more Type 3 (“shared DB”) requests.

For this reason, SmartClient supports arbitrarily deeply nested criteria (as many ands and ors as you like) and a full range of search operators: the usual ones like greater than, but also relative date ranges (“within the last six months”, for example) and the equivalent of SQL “LIKE” patterns.  The set of operators is also customizable & extensible, so that you can deal with quirky server filtering and still have client filtering match.

Also important are rules like: client-side filtering is impossible for this particular field, so if criteria changes for this field, we need to ask the server to do filtering.  But for any other criteria changes, we can do it locally and offload the server.

Advanced criteria support is also critical for cache updates.  To incrementally update a client-side cache, you have to be able to know whether the newly added or newly updated record matches the criteria applied to the overall dataset.  If it does, you insert it into the cache, if it doesn’t, you drop it.  To make this decision, you will again need a client-side criteria system that is an exact match for the server’s criteria system; otherwise, you have to drop the entire cache and reload (a very expensive Type III request).  

To achieve these and other web application optimizations, you need a deep, sophisticated client-side filtering system, which covers a broad range of operators and also allows arbitrarily nested criteria.

Validation: the unsung hero of optimization

validation providing a key area of web application optimization

Validation may seem a strange place to go looking for optimizations.  It’s just checking formats and ranges, right?

The fact is, the more complex your web application becomes, the more you run into validation rules that result in Type II or Type III requests: many validators require checking that related records exist, or that related records are in a specific state, or that a field value is unique amongst all records.

When these validations are performed spuriously or redundantly, they can absolutely hammer your server.

When you catch validation errors early, in the browser, you can avoid server requests.  The more sophisticated your validation system becomes, the more requests can be eliminated.

1. Single-Source Validation: consistent, declarative validation across client & server

Most application developers only write the server validation logic, because that’s all that is required for application correctness.  

Further, there are often separate server and client teams, and client vs server validators are frequently written in different programming languages, so coordinating to make client validation perfectly match server validation can be difficult.

Because of all of this, often, client-side validators simply aren’t written at all, which leaves a big opportunity for web application optimization untapped – client validation can eliminate thousands of expensive Type II & Type III requests within a single user’s all-day session.

The solution is simple: single-source declarative validation.  This means that you declare your validation rules in a format that is accessible to both server and client logic, and the same rule is executed in both contexts, automatically.
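A minimal sketch of what single-source declarative validation can look like, with an invented rule format; the point is simply that the same `rules` object and the same `validate` function run in both the browser and the server (e.g. Node, or regenerated for another server language):

```javascript
// One declarative rule set, one evaluator, two execution contexts.
const rules = {
  quantity: [
    { type: "required" },
    { type: "integerRange", min: 1, max: 999 },
  ],
};

function validate(fieldName, value) {
  const errors = [];
  for (const rule of rules[fieldName] || []) {
    if (rule.type === "required" && (value == null || value === "")) {
      errors.push(fieldName + " is required");
    } else if (rule.type === "integerRange" &&
               (!Number.isInteger(value) || value < rule.min || value > rule.max)) {
      errors.push(fieldName + " must be an integer between " +
                  rule.min + " and " + rule.max);
    }
  }
  return errors;
}

// Same call in both contexts: the client runs it before sending a save
// request, the server runs it again on receipt -- the rules cannot drift.
console.log(validate("quantity", 0));  // one range error
console.log(validate("quantity", 42)); // []
```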

To do this, you need a system that spans client and server.  There is a “received wisdom” in the industry that you can just pick your client-side UI library and your server-side libraries separately, and they just connect via REST, and you’re all set.  When it comes to web application optimization, this is extremely naive: most major web application optimizations involve cross-cutting concerns, where both the client and server teams need to be involved.

Note that I am not asserting that you absolutely must use our SmartClient server and client technologies – like everything discussed here, these are architectural optimizations that can be achieved with any technology, given enough time.  In particular, SmartClient’s validator declarations can be expressed in JSON, so, any server-side system could generate the client-side definitions based on some proprietary server-side definitions, and then SmartClient’s client-side system could read those.  The same is true with using SmartClient’s server framework with another UI technology.

As an example of mixing and matching, we had a customer that was building a special embedded server for miniaturized network hardware (think mesh networks).  For speed and compactness, the server was implemented in raw C (not even C++).  They were able to deliver an extremely rich SmartClient-based UI for this tiny device.  For part of it, they had a system that could output either C code or SmartClient validator definitions from validators declared in XML Schema.  It worked beautifully, and the device and its web UI were extremely rich and efficient.

2. Rich Validator Library with Conditional Validators

Having a single-source declaration of validation rules is key, but, if the built-in declarative validators only cover scenarios like “number must be greater than 5” you are still going to have a lot of trips to the server for more complicated rules.

Basically, the richer your library of declarative validators, the more likely it is that a given validation scenario in an application has a declarative solution with both client- and server-side enforcement, and so the more likely you can have client-side validation logic that avoids Type II & Type III requests.

For this reason, SmartClient’s validation library is extremely rich, and further, supports conditional validation.  That is, you can say that a given validation rule only applies when a certain condition is true.

For example, you can easily express things like: ship date cannot be changed if the Order.status is already shipped.

This again relies on advanced criteria, and on those advanced criteria being in a standard format that both the client- and server-side systems can execute.  Once you have that, you can express powerful validation rules in a very elegant, declarative fashion (JSON or XML).  For example:

<field name="shipDate">
    <editWhen fieldName="Order.status" operator="notEquals" value="Shipped"/>
</field>

A rich validation system is another example of something that is both a direct optimization (less server trips due to errors caught by browser logic) and an indirect optimization, in that it saves a lot of coding effort.

Further, note that just because you can declare validation rules via criteria, that doesn’t mean that you are limited to expressing validation rules via criteria.  You aren’t.  

In fact, one of the most powerful optimization techniques is split validation: if you have a complicated rule, and the whole thing can’t be expressed as criteria, express it partially in criteria, then express the rest of the rule with custom server logic.  

The part that is expressed as criteria will be understood by the browser-side system, and will stop unnecessary requests.  Then, on the server, the full rule is applied, ensuring enforcement.  Here again, I have seen split validation reduce server load so drastically that even end users noticed the effect.
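Here is a hedged sketch of split validation; the rule shape and all names are invented for illustration:

```javascript
// A rule split into a criteria half (runs everywhere) and a custom half
// (runs only on the server, where the needed data lives).
const discountRule = {
  // criteria part: both client and server can evaluate this
  criteria: r => r.discount >= 0 && r.discount <= 50,
  // custom part: needs server-side data (the customer's contract), so
  // only the server executes it
  serverOnly: (r, contract) => r.discount <= contract.maxDiscount,
};

function clientCheck(record) {
  return discountRule.criteria(record); // false => no request is ever sent
}

function serverCheck(record, contract) {
  return discountRule.criteria(record) && discountRule.serverOnly(record, contract);
}

console.log(clientCheck({ discount: 80 }));                      // false: caught in browser
console.log(serverCheck({ discount: 40 }, { maxDiscount: 30 })); // false: full rule enforced
console.log(serverCheck({ discount: 20 }, { maxDiscount: 30 })); // true
```

The browser-side half stops the obviously invalid saves before a request exists; the server always applies the complete rule.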

3. Cache-aware validators

A “cache-aware validator” is a validator that is aware of client-side caches and can inspect them in order to potentially avoid expensive server requests.

The best example here is the common “isUnique” validator, which, at the simplest level, can check whether a field value is unique among records of a given type.

This humble-seeming validator is used all the time to check for collisions when users are naming things: projects, cases, estimates, articles, whatever.  It’s also used for data consistency purposes, to detect duplicate customers, suppliers, partners, etc.

To illustrate the importance of this, in one large banking customer of ours, years ago, DB performance profiling revealed that 3 different queries related to uniqueness were actually a huge proportion of DB load – more than 30%.

We realized there was an opportunity to make the validator smarter, if it was cache aware.  When you are dealing with large datasets which are only partially loaded, you can’t do an “isUnique” check purely on the client, because there might be a collision in data that isn’t loaded.  But this doesn’t stop you from implementing this critical web application optimization:

  1. if there is a collision in local cache, signal an immediate client-side failure (server is never contacted)
  2. if the local cache happens to be both entirely complete and quite fresh, the validator passes client-side, allowing other validations (which may also prevent an unnecessary server trip, if they fail in-browser) to proceed.

Seemingly simple, but extremely powerful.  In the particular app we were looking at, we found that this optimization produced a ~70% reduction in server-side “isUnique” checks, which was a huge boost to overall performance.
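The decision procedure above can be sketched in a few lines; the names and the cache shape here are illustrative:

```javascript
// Cache-aware "isUnique": decide locally whenever the cache allows it.
function isUniqueClientSide(cache, fieldName, value) {
  const collision = cache.rows.some(r => r[fieldName] === value);
  if (collision) return "fail";                     // 1. definite duplicate: no server trip
  if (cache.complete && cache.fresh) return "pass"; // 2. full, fresh cache: no server trip
  return "askServer";                               // otherwise fall back to the server
}

const cache = {
  rows: [{ name: "Q3 Estimate" }, { name: "Q4 Estimate" }],
  complete: false, // only part of the dataset is loaded
  fresh: true,
};

console.log(isUniqueClientSide(cache, "name", "Q3 Estimate")); // "fail"
console.log(isUniqueClientSide(cache, "name", "Q1 Estimate")); // "askServer"
```

Only the third outcome costs a server request, which is why, in write-heavy screens, the savings are so large.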

We rolled this improvement into SmartClient as a default behavior for the built-in “isUnique” validator, and now, all of our customers get that benefit.  We’ve added other “cache-aware” validators as well – it’s a subtle but enormous web application optimization.

4. Standard Validation Protocol – for any type of Object

To avoid unnecessary server trips for validation, you need a validation system that will do smart things like checking all client-side conditions, including validations that are sometimes resolvable client-side but sometimes server-side, before attempting a server request.  And, that validation system must be able to request that multiple fields at once are validated on the server, in a single request.  This implies a whole standardized protocol for contacting the server, conveying data, returning errors, etc.
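A rough sketch of what such a batched protocol might look like on the wire; the request and response shapes are invented for illustration, not an actual protocol:

```javascript
// Every field the client could not fully resolve locally goes out in ONE
// request; errors come back keyed by field name.
function buildValidationRequest(pendingFields) {
  // pendingFields: { fieldName: value }
  return { action: "validate", values: pendingFields };
}

// What a server handler might return for such a request:
function serverValidate(request) {
  const errors = {};
  if (request.values.orderRef === "missing")
    errors.orderRef = ["No such order"];
  if (request.values.supplierId === 0)
    errors.supplierId = ["Unknown supplier"];
  return { status: Object.keys(errors).length ? "error" : "ok", errors };
}

const req = buildValidationRequest({ orderRef: "missing", supplierId: 0 });
const resp = serverValidate(req);
console.log(resp.status);              // "error"
console.log(Object.keys(resp.errors)); // ["orderRef", "supplierId"]
```

The point is the shape of the contract: one request validates many fields, and the error format is uniform, so every screen gets batching and error display without per-screen code.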

Still think you can pick your client and server technologies separately, and just have the React UI guys talk to the Spring Boot server guys and everything will be great?  Obviously not.  This is a huge area of web application optimization, which can make a night-and-day difference in your application’s performance, scalability, responsiveness and end-user productivity.

The only way to achieve these benefits is a coordinated dialog between client and server teams.  Built correctly, a true client & server validation system means that all of the optimizations above, which might seem difficult to apply in an individual screen, instead just happen, without any extra effort on an individual screen.  

This is another instance of an optimization that is both strictly a performance optimization and optimizes your app by reducing development effort.

Powerful, Configurable, Flexible UI

graphic showing powerful UI

There is a kind of “received wisdom” in the industry that highly configurable applications are necessarily slower, because they have to retrieve the configuration information from some database, and then they need a bunch of switching logic to render the view that the user has configured, so that’s slower.

This “received wisdom” is dead wrong, as we will demonstrate.

Perhaps even more important: never underestimate your end users.  They know their job far more intimately than you do; as a software engineer or even product manager, you are exposed only to the most pressing concerns of the moment.  

You think you understand the design of a particular screen very well, because you designed it or you coded it?  No.  Your end users know “your” screen far better than you do, because they have figured out the fastest way to use it to do their job, and usually, that isn’t the way you thought it would be used.

When it comes to web application optimization, the fastest, most scalable applications are the ones that have configurability baked in, where end users can configure the application to match their exact usage pattern, even as that usage pattern changes over time.

UI configurability has to be considered a baseline requirement in web application optimization.  Configurability enhances both performance and productivity.  

Let’s look at some specific examples.

1. Saved Search

This is a subtle one.  Saved Search is just a productivity feature, not an optimization, right?

We rolled out a saved search feature at one of our customers, and within days the end users were asking: wow, what did you do?  Everything is so much faster!  But in this particular release, we hadn’t intended to roll out web application optimizations.  The release was mostly just new features, including saved search.

It was the DBA who finally figured it out: when you arrive at most screens in an application, there is a default search.  For example, in personal banking, the default search might show recent transactions.  In an issue tracker, the default search might be all open issues for your team.

Because of the pervasive saved search feature we had added, users were now configuring their default search, and replacing that default search with something more relevant to them.  In general, those user-specific searches were much lower data volume, made better use of indexes, and hence lowered the database load by enough that even users of other applications were noticing.

Further, before the introduction of the Saved Search feature, most users had a habit of arriving at the default view and then changing it to match what they actually needed.  So consider a user doing this:

  1. see default view (1st unnecessary request)
  2. change sort (2nd unnecessary request)
  3. add one criterion (3rd unnecessary request)
  4. add second criterion (view is now what user needs)

Saved Search – seemingly a convenience feature rather than an optimization – turns four requests into one in this common situation: a 4x reduction.

If that seems like a surprisingly large optimization to assign to Saved Search, realize that it’s actually even larger: sophisticated users need to switch between different views of data, and every time they do that, if there is no “saved search” feature, they do so by incrementally changing the search until it matches what they want.  So Saved Search not only reduces unnecessary requests when users arrive at a screen, it also reduces requests as end users switch between different views of the data that they need.

Once we fully understood the value of saved search as an optimization, we set out to design a saved search feature that could be turned on by default in every single grid, so that in every app ever built with our technology, this particular web application optimization would always be present.

And we did succeed with that – it works automatically, just saving searches to the user’s browser (window.localStorage), but it’s pluggable so that you can instead save searches to the server, and also have admins that can create pre-defined searches.
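A minimal sketch of a pluggable saved-search store along these lines; the names are illustrative, and a tiny in-memory stand-in replaces window.localStorage so the sketch is self-contained:

```javascript
// A store with two methods; any backend with getItem/setItem works --
// the browser's localStorage, or a server-backed adapter.
function makeSavedSearchStore(backend) {
  return {
    save(gridId, name, criteria) {
      const all = JSON.parse(backend.getItem(gridId) || "{}");
      all[name] = criteria;
      backend.setItem(gridId, JSON.stringify(all));
    },
    load(gridId, name) {
      return JSON.parse(backend.getItem(gridId) || "{}")[name] || null;
    },
  };
}

// In the browser the backend would be window.localStorage; here we use
// an in-memory stand-in with the same two-method surface.
const memory = new Map();
const fakeLocalStorage = {
  getItem: k => (memory.has(k) ? memory.get(k) : null),
  setItem: (k, v) => memory.set(k, v),
};

const store = makeSavedSearchStore(fakeLocalStorage);
store.save("ordersGrid", "My open orders", { status: "open", owner: "me" });
console.log(store.load("ordersGrid", "My open orders")); // { status: "open", owner: "me" }
```

Because the backend is pluggable, switching from per-browser storage to server-side storage (or admin-defined shared searches) changes the adapter, not the feature.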

You can see that working here.

2. Other Patterns of End User Configuration

The previous point about Saved Search can be generalized: when you allow your end users to configure your application, they go right to the data they need, they become more productive, and they reduce server load because they are no longer loading data they don’t actually need to look at.

Here are some other examples of configurability that are also optimizations:

– default screen to navigate to after an action

Wherever possible, let users pick the next screen to go to after completing something: at login, when done with a particular process, etc.  It’s easy to add something like a dropdown that says “After saving, go to: [list of screens]”.  Sophisticated end users will absolutely make use of such a shortcut.  You don’t even really need to come up with a way to save their preference across sessions, because with the very long session lengths of true web applications, holding onto that preference for just the session would already be a big boost (but it is better if you can persist it – more on that below).

– saved sets of form values

Imagine “Saved Search” but for forms: with any form in the application, you can save a certain set of values and name it, then re-apply those values whenever you are next using the form.  This is easy to build in a general purpose way, so that it can be simply “turned on” for any form in the application.

It’s clearly a productivity feature, but how is it a web application optimization?  Well, if filling in the form typically requires navigating two comboBoxes and a pop-up dialog, all of which may require searching through data, then every re-application of a saved set of values eliminates those lookups, and the server requests behind them, in one step.

– make your own dashboard

Sophisticated end users are perfectly capable of using a “report builder” or similar interface to create a “dashboard” containing the specific data they need to see – this is especially true of end users who are financial analysts, scientists or the like.  If the user isn’t able to create a dashboard directly they are likely to create one indirectly, often in a very inefficient way.

A dashboard builder is, of course, a non-trivial thing to implement.  However, some advanced frameworks have this as a built-in capability, easy to turn on for a given screen.

These are examples of configurability that can be applied to almost any application, but the fact is, in general, configurability is very application-specific.  The key takeaway here is to understand that configurability increases both productivity and performance.

As far as the perceived drawback of configurability – that you have to save the configuration, load it, apply it, etc – remember that as a developer, you have the option to save configuration in cookies, window.localStorage, and via other mechanisms.  Yes, configuration stored via localStorage will be lost if the user switches devices, and that means it’s not necessarily a good choice for something like a user-created dashboard, which the user may well have put some time into.  However, for something like a default screen to navigate to after login or after a specific workflow, it may be fine – a minor inconvenience that most users never experience, in exchange for configurability that you get “for free” – zero server load.

3. Rich search, sort, grouping, pivot – never skimp

Many times, I’ve had a customer say something like: “your search capabilities are really powerful, but the UI design calls for a simplified search interface, so we turned the entire default search UI off.”

This is a terrible idea.  It has led to some of the worst performance problems I’ve ever seen.

Why?  Because the designer’s idea of the user’s needs is necessarily incomplete, and those needs change over time.

Having seen so very many projects, I can confidently tell you: the search capabilities you actually need are always, always more than you think at first.

But how is this a performance issue?

Because, when the available search isn’t enough, users still need to get their work done.  So they are going to do the search they need to do, somehow, and often the approach they figure out is a performance catastrophe (not to mention the impact on productivity!).

This is one of the key areas in which B2B vs B2C UX design differs, in a way most designers do not fully appreciate: it makes sense to remove advanced search features from a B2C site.  It’s rare that a normal consumer would use them, and the removal of unnecessary search features can reduce server load. 

But B2B is completely different: your users need to get the search done.  If you don’t provide a way to do the search they need, they will come up with a way to do it, because, they have to.  It’s their job.


I have seen lots of clever end user workarounds for underpowered search UIs.  For example: one user needed to view certain data side-by-side, and the app didn’t allow it, and also restricted her to one session per browser, so she resorted to installing extra browsers and even VMs to get more sessions, in order to work around an app that just didn’t have the side-by-side view she needed.  Her usage was killing the server, but ultimately, the UX team could not come up with a better way of achieving what she wanted to do – her workaround was the best option available.

But by far the most common performance catastrophe from limited search, which I have seen no less than 5 times, is having users export to Excel and search there instead.

With no better option, users export enormous data sets to Excel; millions of rows in some cases.  In each case that I’ve seen this, the analysis that the user needed to do in Excel was not actually complicated; SmartClient’s built-in search features would have let them do it entirely in the browser, or at the least, would have allowed them to refine the search so the export would have been small and not a performance problem.

But instead, with the apps in question, where the UX design had specified a simplified or “streamlined” search interface, the users simply couldn’t do what they needed to do.  So of course they went to Excel, and in one case, a particular user’s morning “export” had about a ~30% chance of killing the server (out of memory error), which would interrupt everyone else’s work.

To be clear: simplified search interfaces are great.  From a UX perspective, you should definitely analyze user behaviors, determine the most common search use cases, and build a UI that allows users to execute those common searches with the minimum number of steps.  At the same time, you should give your users a more flexible & general-purpose search interface.  Simplified search and advanced search are not mutually exclusive – we have a SmartClient-based example here showing a straightforward UX that allows both simplified search and advanced search with no compromises, and that approach can also be achieved with other UI technologies.

If you provide only a limited search UI, you may well find that your highly optimized default search interface is indeed performing as expected – it contributes just a fraction of the server load – but real end users are using something else (whether gigantic exports, dozens of concurrent sessions, or whatever it is) to actually get their work done, and that is killing performance, as well as killing productivity.

Multi-Record Editing

representation of multiple records

Mass Update / Multi-Record Editing

What happens if you need a user to be able to edit multiple records and then save them all at once, as a transaction?  Although rare for web sites, this is a common interaction for true web applications.

Frequently, this scenario is handled with server-side storage of unsaved edits, either session-based or DB-based.  Often, there’s a rather complicated mechanism for rendering a grid of the original records with unsaved changes overlaid on top.  Because the unsaved changes are stored on the server, validation becomes a continuous chatter between client and server, which is extremely inefficient.

We’ve seen multiple customer applications where there was just one screen that involved multi-record editing like this, and even so, it dominated the overall performance of the application.

There’s another way: a client-side component can queue up the changes, display them, validate them (including server contact where necessary), then submit them all as a batch.

When you have this capability, you get two massive web application optimizations:

  1. enormous numbers of Type II & Type III requests are eliminated, because the unsaved edits are tracked client-side, can be checked with client-side validators (including cache-aware validators), which radically reduces expensive server requests
  2. as discussed previously, you write a lot less code, and especially, a lot less complicated code. Multi-record editing with server-side temporary storage is rather complicated, and if you don’t have to write it, you have a lot more time to actually focus on performance
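A sketch of the client-side pattern: edits are queued and merged locally, validated in the browser, and sent to the server as one batch.  All names here (EditQueue, flush, etc.) are illustrative, not our actual API:

```javascript
// Queue unsaved edits client-side; the server sees one batched save.
class EditQueue {
  constructor(validate) {
    this.validate = validate;  // per-record client-side validator
    this.pending = new Map();  // recordId -> merged unsaved changes
  }
  edit(recordId, changes) {
    this.pending.set(recordId,
      { ...(this.pending.get(recordId) || {}), ...changes });
  }
  flush(sendBatch) {
    const errors = [];
    for (const [id, changes] of this.pending) {
      const errs = this.validate(changes);
      if (errs.length) errors.push({ id, errs });
    }
    if (errors.length) return { saved: false, errors }; // no request sent
    const batch = [...this.pending].map(([id, changes]) => ({ id, changes }));
    sendBatch(batch);          // ONE server request for all records
    this.pending.clear();
    return { saved: true };
  }
}

const q = new EditQueue(c => ("qty" in c && c.qty < 0 ? ["qty must be >= 0"] : []));
q.edit(1, { qty: 5 });
q.edit(2, { qty: 7 });
q.edit(1, { note: "rush" });   // merged with the earlier edit to record 1

let requests = 0;
console.log(q.flush(batch => { requests += batch.length ? 1 : 0; }).saved); // true
console.log(requests);         // 1 -- a single batched request
```

Every keystroke-level edit stays in the browser; only the final, fully validated batch touches the server.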

You may be thinking that this scenario is simply too much complexity to handle with client-side components. However, having first implemented this in 2005 (yes, seriously), we have the interaction very very well handled, with all of the necessary settings and override points to handle a wide variety of variations on the base scenario. There are many, many deployed applications using our approach.

We’ve actually made it so simple that it takes just one property (which we call autoSaveEdits:false) to turn it on – check it out here. Open your browser tools to look at network traffic – you will see none until the final save.

For applications where this kind of interaction is needed (and it’s not uncommon) this single capability can make the difference between scalable and sluggish.

Tip of the Iceberg

For developers who have been indoctrinated in the school of optimization that is centered on “minimize bytes for initial load”, the set of concepts in this article may come as a bit of a shock.

But this is still just the tip of the iceberg.

If I had the bandwidth and the space to do so, some of the things I would cover would include:

  1. client-side grouping, pivoting and aggregations: giving users a variety of views of the same data set, without any further server contact
  2. ultra-flexible grids: why not give the user the ability to preview any long text field under the selected row?  Why not let them view related records, on the fly?  These flexible views can come with intelligent caching & data reuse, improving productivity and performance at the same time
  3. client-side SQL-like engines: turn an extensive data analysis session into zero server data requests.  This can support far more data volume than you might think
  4. multi-modal presentation: is it a classic list of rows, a tree, a set of tiles, or a node-and-spokes graph?  It’s all of the above, and you can switch on the fly, and the server need not be involved
  5. multi-containers: is it tabs, an accordion, portlets, a linear layout, a dashboard, floating dockable windows?  Again, all of the above, why not allow switching on the fly?  When users can take any part of your app and re-mix it, they invent their own best UI, and if they can share it, both productivity and performance skyrocket!

Methodology, not Technology

planning a methodology

Even though I have mentioned that our technology implements many of the web application optimizations described above, ultimately, this is not a technology, this is a methodology.

It’s a methodology that can be applied to any project or product, regardless of the technology in use, and it can be taught.

My team has saved countless projects & products.  In some, we replaced substantially the entire thing with our technology.  In some, we introduced our technology incrementally, in the highest value areas first.  In some, we never introduced our technology per se; we just applied the patterns.

All of these approaches are solutions that work.

If you are building a true web application, and you want some advice and help reaching the heights that we have reached, reach out.  We can give you a quick take for free (already extremely valuable), and go from there.

Misconceptions & Misunderstandings

From here, let’s go into some misconceptions that people have with the above techniques – lots of people don’t immediately understand how to apply these web application optimization techniques, or they believe that their particular web application just doesn’t fit the patterns that I’ve explained.

This is an understandable perception, however, as I will explain, these techniques cover literally everything that humans think about – which is certainly broad enough to cover your web application!

Aren’t all these optimizations just for CRUD?  My application has lots of non-CRUD operations

When people are introduced to these techniques, and especially to the idea of a framework that supports such techniques, some end up with the misperception that these techniques apply only to CRUD (or only to SQL, or only to ORM, etc).  Here, by “CRUD”, we mean “Create, Retrieve, Update, Delete”: the core operations of SQL systems, and also the operations at the heart of ER (Entity-Relationship) modeling, which underpins SQL and CRUD alike.

This perception that all of this applies only to CRUD is not actually true.

The first thing to realize is that CRUD/ER models simply reflect the way humans think.  ER models are a natural representation of data, not an artificial one invented to make coding easier.  ER modeling really applies to almost everything, not just to a narrow range of classic “business objects” like People, Accounts and Orders.

Specifically, humans naturally group things into types of objects (cars, people, orders, buildings etc) and those objects have attributes (cars have color and number of doors, people have name and height, orders have ship date and a status, buildings have # of floors and architectural style, and so on).

That’s all an Entity-Relationship model is: it just reflects the way people think.

Further, ER modeling has nothing to do with how data is stored, or even whether it is stored permanently at all.  ER modeling is just a way of thinking about and describing something, and is not limited to SQL, and can even be applied to completely ephemeral client-side concepts that never have server storage (for example: outstanding requests).

“non-CRUD” operations that are absolutely CRUD

In my experience, it’s very common for a developer to incorrectly assume that a given operation can’t fit in an ER model, and then go off and implement something custom, throwing away the massive benefits that a framework can offer around ER models – not just performance, but simplicity: error handling, uniform code structure, and other benefits.

Here are several recurring examples that will help to understand just how often “CRUD” works as a modeling strategy:

  1. background processes: you need to kick off a long-running process on the server.  Well, initiating the process is an “add” of a new BackgroundProcess record.  Checking on the process is a fetch of a “BackgroundProcess” record by its unique ID (and/or userId for multiple such processes), and the status of the process is simply a field on the BackgroundProcess record.  Is there an output of the process, like a URL or file?  That’s just a field in the record that is non-null once the process is completed.  Want to cancel the process?  That’s just a “delete” of a BackgroundProcess record.
  2. “documents”, “files” or other blobs containing non-ER content: you may have an OOXML document, binary image, XML/JSON blob, Java-style .properties snippet, or whatever.  The interior content – the OOXML document for example – may not be modeled in an ER/SQL style.  You may be storing it in a “NoSQL”-style DB.  None of that stops you from representing each such document, image or file as an entity/object/row in an ER model.  There are key metadata fields – lastUpdate, ownerId, dateCreated, etc – that fit perfectly into an ER model, and allow you to leverage caching, validation and error reporting features that are built-in to the ER model.  The “special” field – the one containing the blob of content that is not an ER model – is just another field of the record.
  3. asynchronous messages & reliable delivery: sending a message is an “add” on the Messages table.  Use a subsequent “fetch” to check the Message status, whether it was delivered, had an error, etc.  Want to cancel a Message that may not have been delivered yet?  That can be an “update” to set status to “Cancelled”, and if it’s too late to cancel message delivery, then that’s a validation error on the status field.
  4. login / sessions: logging in can be represented as just a fetch on the User table with hashed credentials.  If login is successful, the active sessionId is returned as a field value.  Then, session data is a fetch on the Sessions table with sessionId as criteria.  Logout is a “delete” on the Sessions table.
  5. calculations, estimates or quotes: why not represent these as a “fetch” on a Calculations table, with inputs to the calculation expressed as criteria?  Automatic caching can prevent duplicate calculations.  You can apply validators to inputs to the calculation and you get client and server enforcement.  Putting a quoteDate field on a Quote entity allows you to express validation rules making sure that no save relies on a Quote that is too old.  And so on.
  6. exceptions or error data: something can go wrong in a process or workflow?  Store these as “Errors” or “Exceptions” so you can look them up by the ID of the process, workflow or whatever may run into issues.  Then you can later “delete” them or set status to resolved, etc.
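To make example 1 concrete, here is a sketch of a BackgroundProcess entity exposed purely through CRUD operations, with an in-memory store standing in for the real backend; all names are illustrative:

```javascript
// A "non-CRUD" long-running process, modeled entirely as CRUD.
const processes = new Map();
let nextId = 1;

const BackgroundProcess = {
  add(params) {                          // starting the process = "add"
    const rec = { id: nextId++, params, status: "running", resultUrl: null };
    processes.set(rec.id, rec);
    return rec;
  },
  fetch(id) {                            // polling = "fetch" by unique ID
    return processes.get(id) || null;
  },
  update(id, changes) {                  // completion/progress = "update"
    Object.assign(processes.get(id), changes);
  },
  remove(id) { processes.delete(id); },  // cancellation = "delete"
};

const p = BackgroundProcess.add({ report: "Q3" });
console.log(BackgroundProcess.fetch(p.id).status);    // "running"
BackgroundProcess.update(p.id, { status: "done", resultUrl: "/files/q3.pdf" });
console.log(BackgroundProcess.fetch(p.id).resultUrl); // "/files/q3.pdf"
BackgroundProcess.remove(p.id);
console.log(BackgroundProcess.fetch(p.id));           // null
```

Because the operation surface is standard CRUD, every framework capability built around the ER model (caching, validation, uniform error handling) applies to it automatically.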

Many developers have a kind of “reflex” to take any operation that isn’t a CRUD operation on a business object and assume it needs to be represented as a custom, non-ER operation.  The reality is that it’s rather rare to find an operation that cannot be understood as a SQL-like operation on an entity; almost everything fits into the ER model, and fits in cleanly (not as a hack!).

If you believe you have an operation that simply cannot fit into an ER model, first try describing what needs to happen as if you were speaking to a colleague.  Doing this, you will often just trip over a natural CRUD representation: “I need to start a new background process… oh, that could be an ‘add’ on a BackgroundProcess entity”.

If you miss the opportunity to represent an operation as a standard ER operation, you will lose the validation handling and intelligent caching that would have been automatic with the standard ER approach, and worse, you will often end up building unnecessary, bespoke versions of capabilities that are built into the core ER model.

For example, multiple times I have seen people build a form to kick off a background process, and once they realized there were error conditions to handle, they went and created a custom error reporting format and added custom code to the form to receive, process and display the error.  This is all totally unnecessary – represent the operation as an “add” of a new BackgroundProcess, and all of the above is automatic.  That includes client-side checks that can prevent an invalid request from ever needing to be caught by the server!
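
A sketch of what the CRUD version of this looks like: the kickoff is a plain “add”, and validation failures arrive in one standardized, per-field format that generic form code already knows how to receive and display.  All names here are hypothetical, for illustration only:

```typescript
// Standardized per-field validation errors, shared by every "add" operation,
// so no form needs its own custom error format or display code.
type ValidationErrors = { [field: string]: string };
type AddResult =
  | { status: "ok"; record: { id: number; jobType: string; priority: number } }
  | { status: "validationError"; errors: ValidationErrors };

let nextProcessId = 1;

// Kicking off a background process is just an "add" on a BackgroundProcess
// entity; the same checks could also run client-side, before any request.
function addBackgroundProcess(values: { jobType: string; priority: number }): AddResult {
  const errors: ValidationErrors = {};
  if (!values.jobType) errors.jobType = "jobType is required";
  if (values.priority < 1 || values.priority > 10)
    errors.priority = "priority must be between 1 and 10";
  if (Object.keys(errors).length) return { status: "validationError", errors };
  return { status: "ok", record: { id: nextProcessId++, ...values } };
}
```

Because the error format is uniform across all entities, the form that displays these errors is written once, not once per operation.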

But is it really CRUD / ER?  Let’s get abstract

ER diagram

If you are still not convinced, let’s just consider how truly elemental the CRUD operations are:

  1. create: takes a series of typed inputs
  2. retrieve: takes partial values for existing entities (criteria) and returns a list of existing entities
  3. update: takes a unique ID for an existing entity, and a series of typed inputs
  4. delete: takes a unique ID for an existing entity

Does your operation take a couple of inputs that come from an end user typing something in, and can there be validation problems with those inputs?  OK, use CRUD (“create”).  Otherwise you will reinvent validation handling.

Does your operation return an Array of Objects?  That’s a CRUD “retrieve”.  You don’t have to invent a bespoke protocol for passing inputs and receiving outputs, and if you ever want to display the results in a grid or drop-down, or ever want searching and sorting, those come free with the CRUD representation – whichever operation type you choose.
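
As an illustration of the “retrieve” mapping – and of the quote example from earlier – here is a sketch (toy formula, illustrative names) of a calculation exposed as a fetch whose criteria are the calculation inputs, with a criteria-keyed cache preventing duplicate calculations:

```typescript
// Inputs to the calculation, expressed as fetch criteria.
type QuoteCriteria = { amount: number; termMonths: number };
type Quote = { amount: number; termMonths: number; monthlyPayment: number };

let calculations = 0;                     // counts real computations, for the demo
const cache = new Map<string, Quote[]>();

function fetchQuotes(criteria: QuoteCriteria): Quote[] {
  const key = JSON.stringify(criteria);   // the criteria double as the cache key
  const hit = cache.get(key);
  if (hit) return hit;                    // identical inputs: no recomputation
  calculations++;
  const monthlyPayment = criteria.amount / criteria.termMonths; // toy formula
  const result = [{ ...criteria, monthlyPayment }];
  cache.set(key, result);
  return result;
}
```

Because the operation is shaped like a fetch, generic caching applies with no calculation-specific code at all.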

Yes, CRUD/ER is that universal.  That’s why all DBs, including NoSQL DBs, implement the 4 CRUD operations.

Further, if you take a CRUD approach to your operation, in a UI framework like SmartClient, you get:

  • an instant form for providing inputs to the service
  • an instant grid or detail view for showing the results from the server
  • per-input validation and standardized error handling, including, e.g., display of errors
  • various options around caching of responses, or bypassing the cache for specific circumstances, etc
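
The first two items boil down to deriving UI from declarative field definitions.  Here is a framework-agnostic sketch (hypothetical helper names, not SmartClient’s actual API) of a single field list driving both an auto-generated form and an auto-generated grid:

```typescript
// One declarative field list – the single source of truth for both views.
type Field = { name: string; title: string; type: "text" | "integer" };

const orderFields: Field[] = [
  { name: "orderNo", title: "Order #", type: "integer" },
  { name: "customer", title: "Customer", type: "text" },
];

// “Instant form”: derive one labeled input spec per field.
function buildFormInputs(fields: Field[]) {
  return fields.map(f => ({
    label: f.title,
    inputName: f.name,
    inputType: f.type === "integer" ? "number" : "text",
  }));
}

// “Instant grid”: derive column headers from the very same descriptors.
function buildGridColumns(fields: Field[]) {
  return fields.map(f => ({ header: f.title, key: f.name }));
}
```

Neither view is hand-built, and a new field added to the list appears in both automatically.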

In short, if you ever have an error case that is relative to a specific input, it’s CRUD.  If you ever have a list view to display, it’s CRUD.

And once you recognize the CRUD/ER way of representing your operation, absolutely everything covered in this article applies – all of the potential web application optimizations apply to the operation that you initially considered to be “not a CRUD operation”.

Why not both?  Client-side intelligence and minimum number of bytes downloaded

People sometimes ask: why can’t we have SmartClient’s web application optimization features, but also download a minimum number of bytes on the first ever load?

Usually, this is an ill-posed question.  It comes from either:

1. someone who wants to use SmartClient in a very simple, consumer-facing app where each user visits only once, or visits rarely, with brief sessions.  Often, SmartClient’s ability to instantly integrate with data has gotten a developer excited – they made far faster progress with SmartClient than with any “lightweight” technology!  However, SmartClient is not the right solution here, and people from Isomorphic will happily tell you so and guide you to other technologies that are more appropriate.

2. someone who is building a true web application, as covered in this article, but who simply can’t let go of the optimization principles they have learned regarding web sites.  They somehow want both: a framework so light that a casual visitor will barely notice the download, yet so powerful that a power user gets rich functionality.

It’s the #2 crowd that I would like to address here.

First of all, I would readily agree that there are a lot of supposed “tradeoffs” in software design that are not real tradeoffs.  As I have covered above, with respect to web application optimization, configurability vs performance is not a real tradeoff: you can get both, and indeed configurability (such as Saved Search) is actually an optimization.

It is also common for developers to assert that an API or UI can either be easy to understand or be flexible and powerful – never both at the same time.  I don’t agree.  I have designed many UIs and many APIs that are simple for novice users, yet flexible and powerful enough to handle extremely advanced use cases.

So when I tell you that there is a real tradeoff in the design of web application optimization – minimizing bytes downloaded and maximizing client-side intelligence & optimization are mutually exclusive – understand that I would love to design a system that handles the entire spectrum of use cases; it just isn’t possible.

To understand this, consider designing a fighter jet that is also a good commuter car.

You need to engage enemy jets in a dogfight, and strike a target at night that is defended by radar-guided missile batteries, but also, you need to toodle 10 miles to work each morning, dropping by a drive-through coffee shop.

You can do certain things that work for both use cases – why not have a comfy seat?  But then when you go to design an engine that can go Mach 3, you find that, no matter what you do, you are not going to fit that engine into a single lane on a freeway, at least not without toasting the car behind you.  You could design foldable wings, but, those wings will not be able to survive the stresses of dogfighting at Mach 2.  And so forth.

SmartClient’s architecture is the fighter jet of web applications.  SmartClient is unapologetically heavyweight, because SmartClient can go Mach 3, and the “lightweight” solutions out there cannot.  In fact, they cannot even get in the ballpark of SmartClient’s performance: with a “lightweight” technology, the page will load quickly, and then the users will wait and wait as they do their actual work.

There have been a number of technologies that claim to be both the “fighter jet” and the “commuter car” at the same time.  Without exception, they don’t actually deliver.  For example, an old Google GWT demo showed off an impressively low number of bytes for a tabbed pane and a button.  But that interface had zero interactivity.  If you added an event handler, a text input field, or anything of the kind, the entire remaining framework code – which had been trimmed off for this specific sample – would be downloaded: it was no longer “lightweight” at all.

There’s a simple underlying reason for this, that any software engineer can understand: when you design a system well, you re-use as much as you can.  In SmartClient, that means that the drop-downs for selects and comboboxes are actually the same as the grid component – data-binding works the same way, you can apply formatters, etc, it’s all the same API.  

The grid itself is an instance of the core layout class, so you can insert custom components (like a custom toolbar) into the middle of the grid component.  

The form components for standalone use are the same as the ones for inline editing in a grid, and have all the same APIs and customization capabilities.  You can literally use the same customized editing controls for inline editing in a grid and in a standalone form.

Similarly, when people use components like SmartClient’s menu or combobox system, they see the same APIs they used to configure grids: formatting configuration, extra columns, icon declarations, the works.

What’s the general principle here?

Minimizing bytes downloaded for a narrow use case is in direct conflict with reuse.  This is an inherent conflict, irreducible.

And there is a simple corollary: when you need a robust feature set, SmartClient, with its huge amount of reuse, ends up far smaller than a combination of components from different vendors.  Such a combination necessarily includes repeated re-implementations of the same core capabilities, all of which must be downloaded to make a complete application.

If someone could deliver SmartClient’s full feature set in an astonishingly low number of bytes, I would love that.  In the meantime: SmartClient outperforms other web application technologies by 30x or more, and this is something that can be easily measured and verified.

If someone is interested in working with us to create a technology that broadens SmartClient’s reach, a way of maximally blending full-power web application optimization with minimum downloads, we would be delighted to do that.  There are definitely applications in the middle ground between true web applications and web sites, where such a “blended” technology would be useful.

But if you are trying to create a web application today?  Even if you do not use SmartClient per se, the SmartClient architecture is the right one, and by an enormous margin – it’s not close.


green light for go, nothing can go catastrophically wrong

I hope I have given you some tools to think about the design and architecture of your application, and how to approach both optimization and implementation.

Although I have referred to SmartClient technology in a number of areas above, again, what is covered in this article is actually a methodology, not a technology per se.  This methodology can be applied with any technology, and if this article was not enough of a guide, Isomorphic can help you with it.

If you have any feedback on this article, I would love to hear from you!  There is plenty of room to improve on what’s here.  The best way to get in touch is to Contact Us.

About the Author

Charles Kendrick has been the head of Isomorphic Software for over 20 years. He is the Chief Architect of SmartClient and Reify. Both products are heavily used and very popular amongst the Fortune 500. He has successfully worked with and coached dozens of teams delivering high performance systems and is a pioneer in the field of web application optimization.