
Software Evaluation Simplified: Success 1st Time, Every Time


Software evaluation is an overlooked and subtle discipline. As Isomorphic’s CTO, I’m typically brought in to save projects that have gone off the rails, so I’ve got 20 years of experience in vigorous facepalming.

Very often, we are approached by companies that started with another technology and have hit a dead end. Typically, we end up inserting our more sophisticated components into the middle of existing screens, and then the customer migrates to our technology over time, slowly, painstakingly cutting through the spaghetti code they had to write because they didn’t start with us.

Whenever this happens, I always try to figure out how the customer ended up using some other technology rather than starting with ours.

Sometimes, they just didn’t do any software evaluation at all. People blindly follow trends, and developers are just as guilty of this as anyone else.

However, sometimes we run across a customer who did evaluate our technology and decided against using it, only to regret that decision later.

This happens because people evaluate software in the wrong way.

I’ll explain what I mean with a story.

A Relatable Analogy for Software Evaluation…

Let’s say you are trying to figure out which vehicle would be best to enter in an endurance race, such as the 24 Hours of Le Mans, in which the winner is the car that covers the most ground in 24 hours.

As a first step, you try to figure out if you can get the vehicle to go 20 feet.  A reasonable first test, right?  Clearly a vehicle that can win an endurance race must be able to go 20 feet with ease.

So here are the two possibilities you’re evaluating:

  • the vehicle that won Le Mans last year

.. or ..

  • a tricycle

After testing them out, you determine that both can go 20 feet.  However, the vehicle that won Le Mans gets poor marks because:

  • You had to find the keys
  • You had to open the car door
  • You had to turn the key to start the engine
  • You had to shift into gear
  • It wasn’t obvious which pedal to push to go

So clearly, the tricycle is the better choice for Le Mans, and the next step is to commit to the tricycle and see how fast and efficient it can be made.

Except, obviously not, right?

So what was the mistake? 

Mistakes made when Evaluating Software

The mistake was this: you didn’t test whether the vehicle could do well at Le Mans; you tested whether it could go 20 feet.

And if the task is going 20 feet, then a tricycle looks pretty damn good. In general, a technology is going to look really good when it’s doing the most that it was designed to do, and not as good when it’s asked to do something that’s a little too simple.

Now you may be thinking: that’s ridiculous! No one makes decisions that way.

Ah but they do. It’s just that, when evaluating software, things are more complex, and it’s not as blindingly obvious that you are comparing a race car to a tricycle.

Software Evaluation: The Key Criteria

Here are a few real-life stories of competitive evaluations where our technology “lost”, only to have the customer come back to us later:

Comparing grids by connecting to a rudimentary data service

Our product, SmartClient, can be instantly connected to any SQL table, JPA or Hibernate entity, and without writing any code, you get a dizzying array of features: advanced search, multi-field sorting, data paging, grouping, saved search, pivoting, joins, aggregation, aggregated joins, and more. SmartClient gives you built-in UI for expressing complex nested searches, on-the-fly joins, aggregates and pivots, and it also gives you the entire system for executing those complex queries, all the way down to the database, with many carefully constructed layers of customization possibilities and override points.

Yet, multiple times, we’ve had evaluators try to compare frameworks by connecting to some kind of free public data service, or to a data service created as a tutorial. Invariably, these services are very basic: no paging, no advanced criteria, no sorting, no editing of any kind, none of the advanced features. These trivial services get used for a POC even though the final application would never use such a service, and will certainly require the advanced features that SmartClient provides out-of-the-box.

In this type of evaluation, it takes about as much code and about as much effort to connect SmartClient to the target service as it takes to connect some other, underpowered system. The necessary SmartClient code mostly turns a bunch of features off to deal with such an underpowered service, and then adapts to a poorly designed protocol that is not built for an enterprise UI.
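To make the mismatch concrete, here is a sketch of the request shape an enterprise grid needs a data service to understand, next to what a tutorial-grade service actually accepts. The field and operator names are purely illustrative – this is not SmartClient’s actual wire protocol:

```javascript
// Hypothetical request an enterprise data grid issues for each fetch.
// Field and operator names are illustrative, not SmartClient's actual protocol.
// Example: "open tickets assigned to alice, newest first, rows 0-74"
const gridFetchRequest = {
  startRow: 0,                            // data paging: fetch a window of rows
  endRow: 75,
  sortBy: ["-createdDate", "priority"],   // multi-field sorting
  criteria: {                             // arbitrarily nested search criteria
    operator: "and",
    criteria: [
      { fieldName: "status",   operator: "equals",    value: "open" },
      { fieldName: "assignee", operator: "iContains", value: "alice" },
    ],
  },
};

// A tutorial-grade service typically understands none of this: it accepts a
// bare GET and returns the whole table, so every capability above must be
// switched off or laboriously emulated on the client.
const tutorialRequest = { method: "GET", url: "/api/items" };
```

Every capability the service cannot express – paging windows, multi-field sort, nested criteria – becomes client-side code that must be written for the POC and thrown away later.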

As a result, the final UI and total effort is about the same with either technology, and we might “lose” the evaluation because of something like: the competing technology has a theme that happens to be closer to the customer’s color scheme.

Had the POC requirements matched the requirements of the actual application being built, the result would have been clear: SmartClient provides the full required feature set and more with zero effort, while the competing technology would take multiple years of R&D to match what SmartClient provides out-of-the-box.

It usually takes the customer a few months of development effort to come to the same conclusion, then they come back.

Building a Login Dialog as a head-to-head comparison

First of all, a login dialog is an extremely trivial bit of UI.  For a framework being considered for an enterprise UI, you will ultimately need to do things like allow a user to edit multiple rows, deal with cross-row validation errors, and submit all changes as a transaction.  A login dialog is a joke next to this scenario.

Secondly, a login dialog is an extremely specialized piece of UI.  The entire screen is dedicated to this simple, two-field form (username and password).  This situation occurs exactly once in each enterprise application, and then never again.  Everything else in the app treats space as a premium, and has very, very difficult requirements in basically every area, from layout to data services to error handling to caching – literally nothing about the requirements around a login dialog matches the requirements that will be applied to every other screen in your application.

Thus, when you go to build a login dialog with enterprise form components, you will be reversing a lot of default settings, because the default settings are geared toward typical enterprise forms. For example, visually, you pretty much have the entire screen for two simple text fields, so you might as well make them enormous, and use heavy rounding, which doesn’t work in other contexts (looks weird with square drop-downs, doesn’t fit into square places like table cells, etc.).

Worse, you are using precisely zero of the features you will later be relying on – features that will save you reams of code when you go to build an actual app.

Thus, the erroneous conclusion of this evaluation was: it takes several settings to get a simple login dialog, looks like this other framework is easier!

Four months later they came back: “We tried to build a real form with that other framework and it was nearly impossible to achieve the things you show in your warmup samples. Then we looked at accessibility, internationalization and mobile support, and it was a no-go. Can you help us recover?”

There’s a second reason this evaluation was flawed: true enterprise frameworks are fairly heavy (by design), so the best practice is to use a plain HTML login page. This allows you to begin caching your application while the user is logging in.  We even provide such a starter login page, complete with caching logic.
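The caching idea can be sketched in a few lines of plain JavaScript on such a login page. The resource names are placeholders, and this is a simplified illustration, not the starter page we actually ship:

```javascript
// Sketch: on a plain HTML login page, begin warming the browser cache for the
// application's heavy bundles while the user is still typing credentials.
// File names below are placeholders, not actual SmartClient bundle names.
const APP_RESOURCES = ["app.js", "skin.css", "data.js"];

function preloadAppResources(doc) {
  // Inject <link rel="preload"> hints so the browser downloads each resource
  // without executing it; by the time login succeeds, the bundles are cached.
  return APP_RESOURCES.map((href) => {
    const link = doc.createElement("link");
    link.rel = "preload";
    link.href = href;
    link.as = href.endsWith(".css") ? "style" : "script";
    doc.head.appendChild(link);
    return link;
  });
}

// In a real page you would call: preloadAppResources(document);
```

By the time the user has typed their credentials and submitted, the heavy bundles are already in the browser cache, so the application itself appears to load almost instantly.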

So not only was a login dialog precisely the wrong choice for a head-to-head evaluation, there is no need to build a login dialog in an enterprise framework in the first place!

Replicating a “spacious” design that has no place in enterprise UI

People like UIs to look good, and in a demo of UI components, one of the easiest ways to look good is to create a very “spacious” design, where controls are oversized, a huge amount of padding is used, and enormous, attractively-styled error messages appear in the middle of the form layout, right under the item that has the error.

The problem here is that in enterprise apps, space is at a premium, and there are multiple panes and components on the screen all needing as much space as possible. The “oversized” look works for a simple web page, but not for an enterprise app.

Our platform correctly defaults to showing validation errors as just a compact error icon, which avoids misaligning typical two-column forms, and avoids creating scrolling due to the form growing in size.  In trying to match a design featuring oversized controls and gigantic error messages, the evaluator is trying to replicate an appearance you do not actually want.

It’s straightforward to get the spacious look with our technology, for the rare case that it makes sense. However, in one example of this kind of botched evaluation, the design team worried that they might be “fighting” against our platform’s default look and feel choices, and went with another technology. They came back about 8 months later, having scrapped the old design after criticism of early prototypes, and began using our default look and feel with some customized colors and fonts.

Trying to apply CSS-based layout techniques that work for static content pages

Multiple evaluators have tried to copy CSS-based layouts that they found in “how to build a two-column page” tutorials, only to realize that this doesn’t work, because our layouts are more than just CSS.  CSS-based layouts simply cannot do what our platform can do, in terms of features like Adaptive Width.

So-called CSS-based “mobile adaptive” frameworks simply switch to a completely different layout for smaller screens, rather than maximally taking advantage of screen space, as our platform can.

So here, a strength is perceived as a weakness, and the evaluator decides that a crude CSS-based layout system is the better choice.

In one instance, a few months later, a product manager called us up complaining that his developers were saying that certain layout behaviors were “impossible”, but he could see them right on our website! That ultimately led to switching back to our technology.

So how should you evaluate software like ours?

Our advice is to take the most difficult and complicated screen you have, the one where you’re not even sure how to approach it yet, and try to build that.

Think about what it means that we would advise this. We are the real deal; we don’t take shortcuts and we don’t fake things.

And finally, what are the consequences if you make a mistake and choose an underpowered technology?  Your product designers are repeatedly told that certain features would take too long to implement, so the scope has to be reduced.  After a painfully long and badly delayed development process, in which the developers repeatedly try to re-create features that are already present in SmartClient, a 1.0 version finally shambles out the door.

This 1.0 version is like the tricycle at Le Mans: some kind of engine has been bolted onto the side, which belches smoke and has a tendency to slice off limbs, and the tricycle must be ridden at low speed or the wheels melt!

Meanwhile your competitors, who used our software, entered the race months ago with sleek, flexible, blazing fast vehicles.

Don’t be on Team Tricycle – use the right tool for the job!!

About the Author

Charles Kendrick has been the head of Isomorphic Software for over 20 years. He is the Chief Architect of SmartClient and Reify. Both products are heavily used and very popular amongst the Fortune 500. He has successfully worked with and coached dozens of teams delivering high-performance systems, and is a pioneer in the field of software evaluation.