Speed up your website…

If you build dynamic websites and use JavaScript heavily, maybe these hints can help you out.

1. Merge all your JavaScript files into one “big” file. Reason – to reduce the number of network calls.

2. Compress your JavaScript files. I don’t mean gzip them, that is the next step. By compressing I mean removing all unnecessary whitespace and comments. That can significantly reduce file size.

Here are some online JavaScript compressors:

* http://www.creativyst.com/Prod/3/
* http://www.dithered.com/javascript/compression/
* http://rumkin.com/tools/compression/compress_huff.php
* http://javascript.about.com/library/blcrunch.htm
* http://javascriptcompressor.com/
* http://www.brainjar.com/js/crunch/demo.html

And one excellent commercial one: http://www.codehouse.com/products/soc/
(there is a 30-day trial period, so you can test it).

My experience showed me that you should not trust these tools completely. Take your JavaScript, run it through a compressor, and put it back into your application to see if it still works. Most of those online tools made a few errors here and there. The commercial tool SOC did the job without any errors.
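To give a feel for what these compressors actually do, here is a deliberately naive sketch of the idea in Java (my own toy, not how any of the tools above work): it strips `//` line comments and surplus whitespace. A real minifier parses the source, because this regex-free approach would corrupt string literals that happen to contain `//`.

```java
public class NaiveJsMinifier {
    // Strips // line comments, blank lines, and leading/trailing whitespace.
    // Deliberately naive: it would break a string literal containing "//",
    // which is exactly why real minifiers parse the source instead.
    public static String minify(String source) {
        StringBuilder out = new StringBuilder();
        for (String line : source.split("\n")) {
            int comment = line.indexOf("//");
            if (comment >= 0) {
                line = line.substring(0, comment);
            }
            line = line.trim();
            if (!line.isEmpty()) {
                out.append(line).append('\n');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String js = "// init counter\n"
                  + "var count = 0;   \n"
                  + "\n"
                  + "count = count + 1; // increment\n";
        System.out.print(minify(js));
    }
}
```

Even this crude pass shrinks heavily commented source noticeably; the real tools go much further (renaming locals, collapsing newlines).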

3. Gzip the response. This is widely used and almost a must in today’s applications. Usually you would configure your web server for this (Apache mod_gzip, or compression="on" in Tomcat’s server.xml), but you can also do it with a servlet filter.
Here is one that works – download gzip.rar.
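To see why gzip is such a win for JavaScript, here is a small standalone sketch using plain java.util.zip (not the servlet filter itself, which needs the servlet API): it compresses a repetitive JavaScript-like payload and compares sizes. The generated function bodies are just made-up filler.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    // Compresses the input bytes with gzip and returns the compressed form.
    public static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(input);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive text, like typical JavaScript, compresses very well.
        StringBuilder js = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            js.append("function handler").append(i)
              .append("() { return document.getElementById('x'); }\n");
        }
        byte[] raw = js.toString().getBytes("UTF-8");
        byte[] zipped = gzip(raw);
        System.out.println("raw: " + raw.length
                + " bytes, gzipped: " + zipped.length + " bytes");
    }
}
```

In a servlet filter the same GZIPOutputStream wraps the response’s output stream, and you set the Content-Encoding: gzip header so the browser knows to decompress.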

All these things reduced my .js size by more than 60%, and that is a huge win.

Vint Cerf + Google visit in Zurich

I just came from a very interesting presentation called “Tracking the Internet into the 21st Century”, held by the Internet God himself – Vint Cerf. It was really amazing and inspiring, almost unreal, to actually see a guy who created the Internet. He showed how an experiment he did at Stanford University led him to a leading position in the ARPANET team, where he was able to create the TCP/IP protocol. Believe it or not, he is responsible for the IPv4 standard, where all IP addresses are 32 bits – represented in the form we know them now, e.g. 176.186.75.2.

OK, he admits now that it was a wrong decision, but back then it seemed perfectly appropriate. Why wrong? Because, according to some estimates, somewhere around the year 2010 we are going to run out of available IP addresses. They are already working on a solution, don’t worry. It is IPv6, with 128-bit IP addresses. The problem is adoption, of course. All nodes on the Internet must comply with the new addressing standard, and that takes time. So one scenario that may happen is “isolated islands” of nodes that support IPv6 but that you might not be able to reach. For example, you type “www.some-future-website.com”, which DNS resolves to an IPv6 address like 2001:db8::ff00:42:8329. If your provider does not support IPv6, you won’t be able to see that website.

Luckily, these guys are doing their job very well, and doomsday will definitely be avoided.

Another thing that seemed like real science fiction was the InterPlanetary Internet project. They are working on a protocol that all “space ships” would use. So, as satellites are launched into space one by one, they will be able to communicate with each other over this protocol. That is actually how the “earthly” Internet started: node by node. Imagine the same situation, but this time in space. Cool and weird at the same time. But isn’t that what people were saying about the Internet in the beginning, too? Think about it.

Vint Cerf works for Google now, and this presentation took place in the Google development center in Zürich. We had a little tour around their offices afterwards. Impressions? Well, mixed.

The offices look really cool. Relaxing rooms all around, hanging sleeping bags, power balls, toys, and everything is very colorful. Really “googley”, as they call it. A bit scary was that it was around 21:00 and a lot of engineers were still there. Are they too busy, too geeky, or too googley? I don’t know. I just hope they are sitting there because they don’t have a private life, and not because deadlines are too short and project managers are breathing down their necks to get the job done as soon as possible.

All in all, an excellent evening. I guess my private life sucks as well, when I consider that an excellent evening. 🙂

Breadth first search algorithm in Java (with visited paths)

Basically, there are two widely used algorithms for traversing graphs – BFS (Breadth First Search) and DFS (Depth First Search).

BFS – goes level by level through the graph. It visits all of the parent’s children first, then all of the children’s children, and so on. For implementing BFS, the most important keyword is – queue. First you take all of the parent’s children and examine them one by one. If you don’t find what you are looking for on that level, you take all of the children’s children (grandchildren) and put them at the end of the queue. Then you take the next element from the queue and continue. With BFS you are sure to find the “closest” nodes first. “Closest” means reachable over the fewest edges.

DFS – goes depth first. It reads the children; if the target is not found, it takes one child and reads its children, and so on. For implementing DFS, the most important keyword is – stack. You push the nodes onto a stack one by one and pop them off (or, equivalently, recurse).
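The two descriptions above can be sketched side by side. This is a minimal version of my own (class and method names are mine), over an adjacency-list graph of strings; swapping the queue for a stack is literally the only structural difference between the two traversals.

```java
import java.util.*;

public class GraphTraversal {
    // BFS: a queue gives level-by-level visiting order.
    public static List<String> bfs(Map<String, List<String>> graph, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        visited.add(start);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            order.add(node);
            for (String next : graph.getOrDefault(node, Collections.emptyList())) {
                if (visited.add(next)) {   // true only the first time we see it
                    queue.add(next);
                }
            }
        }
        return order;
    }

    // DFS: an explicit stack gives depth-first visiting order.
    public static List<String> dfs(Map<String, List<String>> graph, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            String node = stack.pop();
            if (!visited.add(node)) continue;   // already seen
            order.add(node);
            List<String> children = graph.getOrDefault(node, Collections.emptyList());
            // Push in reverse so the first child is visited first.
            for (int i = children.size() - 1; i >= 0; i--) {
                stack.push(children.get(i));
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = new HashMap<>();
        g.put("A", Arrays.asList("B", "C"));
        g.put("B", Arrays.asList("D"));
        g.put("C", Arrays.asList("D"));
        System.out.println(bfs(g, "A")); // [A, B, C, D] – level by level
        System.out.println(dfs(g, "A")); // [A, B, D, C] – one branch at a time
    }
}
```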

There are many BFS/DFS implementations in Java available on the net, but somehow none of them fitted my needs. I needed not only to find *all* possible connections between two nodes, but also to remember all the nodes I visited in between. In other words, I wanted to know all paths between two nodes Nx and Ny.

Here I come…
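The original code is not shown here, so the following is my own sketch of the idea: a BFS where the queue holds whole paths instead of single nodes. When a dequeued path ends at the target, it is one complete answer with every intermediate node remembered; shorter paths surface before longer ones, as BFS promises. A node may appear in many paths, but never twice in the same path.

```java
import java.util.*;

public class AllPathsBfs {
    // BFS variant: the queue holds entire paths rather than single nodes,
    // so every connection between start and target is found, together
    // with all nodes visited along the way.
    public static List<List<String>> allPaths(Map<String, List<String>> graph,
                                              String start, String target) {
        List<List<String>> result = new ArrayList<>();
        Deque<List<String>> queue = new ArrayDeque<>();
        queue.add(Arrays.asList(start));
        while (!queue.isEmpty()) {
            List<String> path = queue.poll();
            String last = path.get(path.size() - 1);
            if (last.equals(target)) {
                result.add(path);            // one complete path found
                continue;
            }
            for (String next : graph.getOrDefault(last, Collections.emptyList())) {
                if (!path.contains(next)) {  // avoid cycles within one path
                    List<String> extended = new ArrayList<>(path);
                    extended.add(next);
                    queue.add(extended);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = new HashMap<>();
        g.put("Nx", Arrays.asList("A", "B"));
        g.put("A", Arrays.asList("Ny"));
        g.put("B", Arrays.asList("A", "Ny"));
        for (List<String> path : allPaths(g, "Nx", "Ny")) {
            System.out.println(path);
        }
    }
}
```

Note the trade-off: because visited-ness is tracked per path rather than globally, this explores exponentially many paths on dense graphs – fine for the small graphs I had in mind, but worth knowing.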

OK, it was about time for me to start writing a blog.

I have been stuck with Java for 7 years now. Lots of things have happened in the meantime, and this blog is going to be useful mainly to me, because I am going to write about things I come upon during everyday work. So, not to forget them, I will write them down. Here.

If you find it useful, you are welcome to come back again.

Cheers,
Nemanja