It was the year 2000 (or so).
I was a happy and very productive VB6 developer when something new was arriving in the software development world (at least in my country): web development (dynamic pages).
Like any developer who started with the C language, this kind of change always excited me.
I took my baby steps with classic ASP, but I quickly noticed that building a web app was not as seamless as working with my old WinForms. There were so many "manual" tasks… If I wanted to recreate any of the applications I had already built in VB6, it was much more work and very difficult, or even impossible.
As with many other technologies, I discarded it, thinking it would be just another unsuccessful trend. There was no way such a messy thing could last! I was convinced that anything less productive than the VB6 way of programming would not stand a chance.
I was wrong.
Web applications kept growing and growing. The way of developing them changed, several times. Microsoft tried to make it feel like building WinForms apps, but, for me, the productivity was not even close (and even though it has evolved a lot, it's not there yet).
In a series of articles I will show why I think web application technologies (and their development) are a painful path -one that some demon dropped on Earth to laugh at us humans (oh well, almost humans: programmers)- compared to the good old WinForms app development.
I will go through several dimensions, but let's start with the most obvious one: performance.
A web application (WA) is slower than a desktop app. There is no doubt about this. (If you have any, you need to go back and study how compiled programs work.)
But let's look a bit (just a bit) deeper.
Imagine you have two applications, each with just one screen that shows a grid with data from a database. Now, let's see how they work:
| Web App (WA) | Desktop App (DA) |
|--------------|------------------|
| User clicks on icon | User clicks on icon |
| Browser is loaded in memory | Application is loaded in memory |
| Browser requests the application page from the web server | |
| Web server communicates with the database to get data | Application communicates with the database to get data |
| Web server sends the view and the data to the browser | |
| Browser renders view and data | Application renders view and data |
| User sees data | User sees data |
If you count the steps, the DA performs fewer of them, but this is not as important as the fact that there are fewer participants as well.
Between your user and the data, you have not only your application but two more intermediaries that have nothing to do with your app's functionality: the web server and the browser.
These two intermediaries waste communication and processing effort, and the main reason is the HTTP protocol:
Web servers and browsers talk HTTP. This protocol is text based, but, of course, your data model is not, so a conversion needs to be done, and a conversion (like any operation) on a CPU has a cost.
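To make that cost concrete, here is a minimal sketch (Python, just for illustration; the record layout and the iteration count are my own invention) timing the string round trip that a text protocol forces on every piece of data:

```python
# A minimal sketch: time the native-object -> string -> native-object
# round trip that a text protocol imposes. The record layout below is
# an invented example, not taken from any real application.
import json
import time

record = {"id": 42, "name": "customer", "balance": 1234.56}

start = time.perf_counter()
for _ in range(100_000):
    text = json.dumps(record)   # native object -> string (ready to send)
    back = json.loads(text)     # string -> native object (on arrival)
elapsed = time.perf_counter() - start

assert back == record           # same data, but we paid for it twice
print(f"100,000 text round trips took {elapsed:.2f}s")
```

A desktop app passing the same object around in memory pays none of this.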
"Well" -you could say- "then why don't we have our model in strings, to avoid this conversion?"
Let me answer that with this quote from http://ubjson.org/:
"Marshalling native programming language constructs in and out of a text based representations does have a measurable processing cost associated with it.
In high-performance applications, avoiding the text-processing step"... "can net big wins in both processing time and size reduction of stored information".
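To see what the quote means in practice, here is a small comparison (Python; the three-field row and the packed layout are assumptions made for this example, not any standard wire format) of one record as JSON text versus a binary encoding:

```python
# One record as JSON text versus a packed binary layout.
# The field layout (int32, float64, uint8) is an assumption for this
# example, not any standard wire format.
import json
import struct

row = (42, 1234.56, 1)  # id, balance, active flag

as_text = json.dumps({"id": row[0], "balance": row[1], "active": row[2]})
as_binary = struct.pack("<idB", *row)

print(len(as_text.encode("utf-8")))  # 43 bytes of text that must be parsed
print(len(as_binary))                # 13 bytes read directly into memory
```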
On the server side, encoding/decoding needs to happen in order to work with the native objects of our model. Remember, our model is not defined as one long string, but as objects. This constant transformation not only adds unnecessary overhead to the process, but also adds the effort of handling possible errors in the transformation.
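Here is, roughly, what that error control looks like on the server side; the handler and the payloads are hypothetical, but any request body truncated or mangled in transit would hit the same path:

```python
# Sketch of the extra error handling the text step forces on a server.
# The handler and payloads are hypothetical examples.
import json

def handle_request(body: str) -> dict:
    try:
        model = json.loads(body)          # string -> native objects
    except json.JSONDecodeError as err:   # a failure mode a DA never faces
        return {"status": 400, "error": str(err)}
    return {"status": 200, "rows": model.get("rows", [])}

print(handle_request('{"rows": [1, 2]}'))  # well-formed body
print(handle_request('{"rows": [1, 2'))    # truncated in transit
```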
But what happens on the client side?
The browser needs to make the same effort to interpret the string.
So, in a WA: the browser encodes (request) and sends (request); the web server receives (request), decodes (request), encodes (response) and sends (response); the browser receives (response), decodes (response) and renders (data). A DA just renders (data). Now you can compare the number of steps and estimate the effort.
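If it helps to see those steps side by side, here is a toy sketch of the two pipelines; every function stands in for real work, the HTTP transport itself is not modeled, and the data is invented:

```python
# Toy sketch of the two pipelines above. The extra conversions on the
# WA path are the point; the transport itself is not modeled.
import json

data = [{"row": i} for i in range(3)]   # invented sample data

def render(rows):
    return "\n".join(str(r) for r in rows)

# DA: one step between the data and the screen.
da_output = render(data)

# WA: encode on the server, ship the string, decode in the browser,
# and only then render.
wire = json.dumps(data)                  # server encodes the response
wa_output = render(json.loads(wire))     # browser decodes, then renders

assert da_output == wa_output            # same result, more steps to get it
```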
Another example of binary versus string processing? Let's look at a screenshot of my browser with Gmail open versus the Outlook desktop version.
You can see how much memory each of them is using. But that's not all: my Outlook has 14 plugins loaded (because of company policies), and the web tool's functionality is not even comparable to what the desktop counterpart offers me.
I think my point is proven, but you don't have to believe me. I invite you to try it yourself: create one application of each type that renders 10,000 rows in a grid, and let's see which one is faster.
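If you want a quick approximation before building both grids, this rough sketch (Python, with an invented row shape) times the text round trip against consuming 10,000 rows directly:

```python
# Rough approximation of the challenge: 10,000 rows through a text
# round trip (the WA path) versus used directly (the DA path).
# The row shape is invented for the example.
import json
import time

rows = [{"id": i, "name": f"row {i}", "value": i * 1.5} for i in range(10_000)]

start = time.perf_counter()
total = sum(r["value"] for r in rows)       # DA: use the objects directly
da_time = time.perf_counter() - start

start = time.perf_counter()
decoded = json.loads(json.dumps(rows))      # WA: encode + decode first
total = sum(r["value"] for r in decoded)
wa_time = time.perf_counter() - start

print(f"DA path: {da_time:.4f}s | WA path: {wa_time:.4f}s")
```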
Oh, by the way, our next technology challenges, High Performance Computing and Big Data, will need all the power a CPU can provide… will we keep wasting it on intermediaries and strings?