Getting Rid of the Navigation Click Sound on IE

Internet Explorer plays a little click sound when the location of a page changes. This is a great usability feature as it lets the user know that something is happening. However, this little click sound can become really annoying with web applications that make extensive use of iframes. For example, imagine an application using several iframes to sandbox untrusted third party ads. Refreshing the content of the iframes (by changing their src attribute) makes the application sound like an automatic rifle. Scary, huh? Well, I recently found out about a very clever trick that can make this click sound go away.


Instead of refreshing an existing iframe like this:

iframe.src = "...";


...create a brand new iframe, set its src attribute, and swap it with the old one:

var newIFrame = document.createElement("iframe");
newIFrame.src = "...";
iframe.parentNode.replaceChild(newIFrame, iframe);

The secret is to set the iframe src attribute before appending it to the document… However, this simple approach exhibits very unpleasant flickering. The following snippet shows how to get rid of the flickering as well:

function setIFrameSrc(iframe, src) {
    var el;
    iframe = YAHOO.util.Dom.get(iframe);
    if (iframe) {
        // Create a new hidden iframe.
        el = iframe.cloneNode(true); = "absolute"; = "hidden";
        // Keep the original iframe id unique! = "";
        // Listen for the onload event.
        YAHOO.util.Event.addListener(el, "load", function () {
            // First, remove the event listener or the old iframe
            // we intend to discard will not be freed...
            YAHOO.util.Event.removeListener(this, "load", arguments.callee);
            // Show the iframe.
   = "";
   = "";
            // Replace the old iframe with the new one.
            iframe.parentNode.replaceChild(this, iframe);
            // Reset the iframe id.
   =;
        });
        // Set its src first...
        el.src = src;
        // ...and then append it to the body of the document.
        document.body.appendChild(el);
    } else {
        iframe.src = src;
    }
}

This example demonstrates this technique (turn the volume up, and open it with Internet Explorer). Note: the navigation sound you hear when using Internet Explorer (or Windows Explorer) can be configured via the Sounds dialog accessible from the Control Panel.

Update: I forgot to mention that, by using this trick, you will break the back / forward navigation buttons. Therefore, this trick should only be used for non-navigational purposes.

Posted in Web Development | 7 Comments

What happened to Gmail?

I have been using Gmail ever since it first went public. I’ve always liked it for its simplicity, reliability and performance. When I read that the Gmail team had done some work to make their product even faster, I could not wait to get my hands on the new version. This morning, I was finally able to try it out and I was extremely disappointed. Firefox kept crashing, locking up, and the overall performance went down dramatically (in spite of disabling Firebug and turning the chat feature off.) I have read many such reports on other blogs, so I am wondering what the QA procedure is at Google. Maybe it’s time to put the “beta” back in Gmail, or roll back the previous version…

In the meantime, I’ll switch back to the good old Yahoo! Mail.

Update: Gmail is still in beta. For some reason, I thought they had passed that stage. My apologies for the misinformation.

Update: The Gmail developers kept a link to the older version (look at the links located at the top right of the page). A very wise decision indeed.

Posted in Uncategorized | 6 Comments

Running CPU Intensive JavaScript Computations in a Web Browser

The pattern discussed below is a well-known one that has been in use for about ten years. The goal of this article is to present it in a new light, and most importantly to discuss ways of reducing its overhead.

The biggest deterrent to running CPU intensive computations in a web browser is the fact that the entire browser user interface is frozen while a JavaScript thread is running. This means that no script should ever take more than about 300 msec to complete. Breaking this rule inevitably leads to a bad user experience.

Furthermore, in web browsers, JavaScript threads have a limited amount of time to complete (there can be either a static time limit — that’s the case of Mozilla based browsers — or some other limit such as a maximum number of elementary operations — that’s the case of Internet Explorer). If a script takes too long to complete, the user is presented with a dialog asking whether that script should be terminated.

Google Gears provides the ability to run CPU intensive JavaScript code without the two aforementioned limitations. However, you usually cannot rely on the presence of Gears (in the future, I would like to see a solution like the Gears WorkerPool API become part of the standard browser API).

Fortunately, the setTimeout method of the global object allows us to execute code on a delay, giving the browser a chance to handle events and update the user interface, even if the timeout value passed to setTimeout is 0. This allows us to cut a long running process into smaller units of work, and chain them according to the following pattern:

function doSomething(callbackFn [, additional arguments]) {
    // Initialize a few things here...
    (function () {
        // Do a little bit of work here...
        if (termination condition) {
            // We are done: invoke the completion callback.
            callbackFn();
        } else {
            // Process the next chunk.
            setTimeout(arguments.callee, 0);
        }
    })();
}
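
To make the pattern above concrete, here is a minimal, self-contained sketch that sums a large array in chunks. The chunk size, the sumInChunks name, and the named inner function (used in place of arguments.callee) are illustrative choices of mine, not part of the original pattern:

```javascript
// A deliberately simple application of the pattern:
// summing a large array without blocking the browser.
function sumInChunks(values, callbackFn) {
    var total = 0, index = 0, CHUNK_SIZE = 1000;
    (function next() {
        // Do a little bit of work: process at most CHUNK_SIZE elements.
        var end = Math.min(index + CHUNK_SIZE, values.length);
        while (index < end) {
            total += values[index++];
        }
        if (index >= values.length) {
            // We are done: hand the result to the completion callback.
            callbackFn(total);
        } else {
            // Let the browser breathe, then process the next chunk.
            setTimeout(next, 0);
        }
    })();
}
```

The completion callback fires only once the last chunk has been processed, while the browser remains free to handle events and repaint between chunks.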

This pattern can also be slightly modified to accept a progress callback instead of a completion callback. This is especially useful when using a progress bar:

function doSomething(progressFn [, additional arguments]) {
    // Initialize a few things here...
    (function () {
        // Do a little bit of work here...
        if (continuation condition) {
            // Inform the application of the progress.
            progressFn(value, total);
            // Process the next chunk.
            setTimeout(arguments.callee, 0);
        }
    })();
}

This example demonstrates the sorting of a fairly large array using this pattern.


A few things are worth keeping in mind when using this pattern:

  1. This pattern has a lot of overhead, i.e. the total amount of time required to complete a task can be far greater than the time it would take to run the same task uninterrupted.
  2. The shorter each cycle, the more responsive the user interface, but also the greater the overhead, and therefore the greater the overall time required to complete the task.
  3. If you can be sure that each iteration of your algorithm is of very short duration (say 10 msec), you may want to group several iterations within a single cycle to reduce the overhead. The decision whether to start the next cycle or continue with more iterations can be made based on how long the current cycle has been running. This example demonstrates this technique. Although it uses the same sorting algorithm as the example above, notice how much faster it is, while still keeping the user interface perfectly reactive.
  4. Never pass a string to setTimeout! If you do, the browser needs to do an implicit eval every time the code is executed, which adds an incredible amount of completely unnecessary overhead.
  5. If you manipulate global data, make sure that access to that data is synchronized since it could also be modified by another JavaScript thread running between two cycles of your task.
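
Point 3 above can be sketched as follows: instead of running a fixed number of iterations per cycle, each cycle keeps iterating until a time budget is exhausted. The 50 msec budget and all names below are illustrative assumptions of mine:

```javascript
function processWithBudget(items, workFn, callbackFn) {
    var index = 0, TIME_BUDGET = 50; // msec per cycle (arbitrary)
    (function cycle() {
        var start = new Date().getTime();
        // Group as many short iterations as fit in the time budget...
        while (index < items.length &&
               new Date().getTime() - start < TIME_BUDGET) {
            workFn(items[index++]);
        }
        if (index >= items.length) {
            // We are done.
            callbackFn();
        } else {
            // ...then yield to the browser before the next cycle.
            setTimeout(cycle, 0);
        }
    })();
}
```

Tuning the budget trades responsiveness against total running time: a larger budget means fewer setTimeout round-trips, at the cost of a slightly less reactive user interface.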

Finally, consider running this kind of task on the server (though you’ll have to deal with serialization / deserialization and network latency, especially if the data set is large). Having to run CPU intensive computations on the client might be a sign of a deeper, more serious architectural problem with your application.

Posted in Web Development | 5 Comments

The Birth Of Web 3.0

Is Web 3.0 yet another buzz word, or is it a real turnaround in our industry?

Web 1.0 was the good old web of the 1990s. In those times, all client-side changes were the result of a server round-trip. The Internet was ramping up in popularity.

Web 2.0 has been a little more than just a technological evolution. The staple of Web 2.0 has been the emergence of social media (Internet users creating most of the content), powered by mature technologies (DHTML, Ajax) on somewhat stable web browsers.

Web 3.0 is not a revolution either. It is yet another technological evolution destined to provide users with an even better experience, both online and offline. Web 3.0 will lead to the blurring of that artificial wall between the web browser and the desktop, providing a full — but secure — integration with devices and services exposed by the operating system.

Web 3.0 is just starting. Look around you and you’ll see that Web 3.0 technologies are slowly cropping up everywhere on the web. Google Gears, one of the first Web 3.0 technologies, allows you to build web applications that can work offline. Thanks to Google Gears, applications such as Remember The Milk, an online to-do list and task management system, can now work offline. The Adobe Flash player already allows application developers limited access to the webcam and the microphone. Soon, we’ll also be able to drag and drop files from the desktop to a web browser (see this Java Upload Applet for an example using Java technology).

Another aspect of Web 3.0 is the use of stunning graphics, smooth animations, high definition audio and video, 3D, etc. and all of this inside a web browser!

At first, Web 3.0 features will be available using plugins (Google Gears, Java, Flash, Silverlight, ActiveX and Firefox extensions, etc.). But slowly, we may start seeing browser vendors integrating them into their browsers, followed by some level of standardization. The HTML 5 Working Draft seems to be going in the right direction.

These are exciting times for web front-end engineers! The risk of fragmentation, inevitable with such ground-breaking technologies, will hopefully be mitigated in the short term by the use of JavaScript toolkits. The Dojo Toolkit, for example, has already started making Web 3.0 features available (see dojo.gfx and the Dojo Offline Toolkit). Hopefully, all the other major frameworks will follow suit so we can all start building cool new applications that wow our users!

Posted in Web Development | 7 Comments

The New Yahoo! Search Has Finally Arrived!

Yahoo! launched a new version of its search engine today. Until now, I was a Google user simply because Google’s results were a little bit more relevant, and also because it seemed a bit faster. However, the new Yahoo! Search has won me over. Here’s why:

First of all, the Yahoo! search page has been simplified to the extreme, which makes it load extremely fast. Second, the search page now has an auto-complete feature, similar to Google Suggest. I had been waiting for this feature for a long time! Finally, Yahoo! has made huge improvements to the search results page, embedding rich media within search results, and adding an assistant to help you refine your search, and even explore related areas that you may not even have been aware existed! This is simply brilliant! Not only does searching with Yahoo! quickly and efficiently lead you to what you were looking for, but it has also become a fun learning experience! Give it a try, and like me, you’ll quickly adopt it!

Below is a screenshot of a search for Nelly Furtado:

Posted in Uncategorized | Comments Off

Adobe MAX

I am currently attending the Adobe MAX conference in Chicago, IL. Yesterday’s keynote was a great showcase of what Adobe’s latest technologies are about to bring to the web and to the desktop. Here are a few pictures of the keynote (hover over the images to get a short description).

Kevin Lynch's keynote

An AIR application running Google Analytics

The winners of the AIR challenge

Adobe Developer Connection announcement

Heidi Williams talking about the new features in Flex 3

The Flash Player team talking about the new features in Flash Player 10

I have been involved with web development since, and Ajax since (before it was even coined “Ajax”). I have looked at Adobe’s technologies for a while, and have finally come around. I have to admit that Flash Player 9, ActionScript 3, Flex 2 (Flex 3 coming very soon) and Flex Builder 2 (soon Flex Builder 3) make for a very solid development platform to create rich Internet applications. Add AIR (Adobe’s Integrated Runtime) to the mix and you have a great platform to develop cross-platform, web-enabled applications.

My only problem with Adobe’s technologies is that they are proprietary. However, this only underscores the failure of the W3C and other standards bodies to push the web forward. And with an incredible 90% market penetration (according to Adobe), isn’t Flash a de facto standard anyway?

Update: After attending the day 2 keynote, I keep thinking that while so many web developers are still trying to figure out how to make rounded corners work on all browsers, Adobe is really pushing the envelope with truly ground-breaking technologies. What a contrast!

Posted in Web Development | Comments Off

YUI Compressor Version 2.2.1 Now Available

I implemented a few enhancement requests and fixed a bug in this new version of the YUI Compressor. Let me know if you encounter any issues with it.

Update (9/27/07): YUI Compressor version 2.2.2 is now available. It fixes a lot of bugs that have been reported recently. By the way, I really appreciate all the bug reports, so keep them coming!

Update (9/28/07): New bugs have been reported and fixed in version 2.2.3, now available for download (check out the CHANGELOG file on the download page). And keep these bug reports coming!

Update (10/1/07): A few more minor bugs have been fixed in version 2.2.4. Thanks for the bug reports!

Download version 2.2.4 of the YUI Compressor

Posted in Web Development | 27 Comments

Trimming comments in HTML documents using Apache Ant

This short article, explaining how to trim unnecessary code (comments, empty lines) from HTML documents, is a follow-up to an article published a couple of weeks ago on this blog: Building Web Applications With Apache Ant. Basically, the idea is to use Ant’s optional replaceregexp task as shown below:

<target name="-trim.html.comments">
    <fileset id="html.fileset" dir="${build.dir}"
        includes="**/*.jsp, **/*.php, **/*.html"/>
    <!-- HTML Comments -->
    <replaceregexp replace="" flags="g"
        match="\<![ \r\n\t]*(--([^\-]|[\r\n]|-[^\-])*--[ \r\n\t]*)\>">
        <fileset refid="html.fileset"/>
    </replaceregexp>
    <!-- Empty lines -->
    <replaceregexp match="^\s+[\r\n]" replace="" flags="mg">
        <fileset refid="html.fileset"/>
    </replaceregexp>
</target>
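
For readers who find Ant's regex escaping hard to follow, the comment-stripping pattern corresponds roughly to the following JavaScript sketch (a simplified equivalent of my own, not part of the build file; like the Ant version, it also strips IE conditional comments, hence the caution below):

```javascript
// Simplified equivalent of the Ant "HTML Comments" pattern above.
// Beware: this removes ALL comments, including IE conditional
// comments such as <!--[if IE]> ... <![endif]-->.
function trimHtmlComments(html) {
    return html.replace(/<!--([\s\S]*?)-->/g, "");
}
```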

Update: Use this code very carefully as it is dangerous territory (thanks to my co-worker Ryan Grove for pointing out some of its shortcomings).

Posted in Web Development | 6 Comments

YUI Compressor Version 2.2 Now Available

This new version of the YUI Compressor supports stdin and stdout. This means that you can now call the YUI Compressor using the following command line:

java -jar yuicompressor-2.2.jar --type js < input.js > output.js

You can still use the following syntax as well:

java -jar yuicompressor-2.2.jar -o output.js input.js

This has three main consequences:

  1. All informational and error messages are now printed to stderr.
  2. If no input file is specified, the YUI Compressor defaults to stdin. In that case, you must specify the --type option because the YUI Compressor has no way of knowing whether it should invoke the JavaScript or the CSS compressor (there is no file extension to look at).
  3. If no output file is specified, the YUI Compressor defaults to stdout (in prior versions, it used to create a file named after the input file, with a -min suffix appended).

The other main feature brought by this new version of the YUI Compressor is the support for JScript conditional comments:

   /*@if (@_win32)
      document.write("OS is 32-bit, browser is IE.");
   @else @*/
      document.write("Browser is not 32 bit IE.");
   /*@end @*/

Note that the presence of a conditional comment inside a function (i.e. not in the global scope) will reduce the level of compression for the same reason the use of eval or with reduces the level of compression (conditional comments, which do not get parsed, may refer to local variables, which get obfuscated). In any case, the use of Internet Explorer’s conditional comments is to be avoided.

Finally, a few improvements have been made to the CSS compressor.

Download version 2.2 of the YUI Compressor

Posted in Web Development | 12 Comments

Building Web Applications With Apache Ant

The importance of having a solid build process

Modern web applications are large and complicated pieces of engineering, using many different technologies, sometimes deployed on hundreds (or even thousands) of servers throughout the world, and used by people from dozens of locales. Such applications cannot efficiently be developed without relying on a solid build process to do all the dirty and repetitive work of reliably putting all the pieces together.

Apache Ant


Many tools (make, gnumake, nmake, jam, etc.) are available today to build applications. However, when it comes to building a web application, my personal favorite is definitely Apache Ant (here is a good explanation of Ant’s benefits over the other tools). This short tutorial assumes you already have some basic knowledge of Ant (if you don’t, you should flip through its user manual beforehand). This article will focus mainly on building front-end code using Ant.

Build types

It is often useful to build an application differently at different stages of its life cycle. For instance, a development build will have assertions and tons of logging code, while a production version will be stripped of all that. You may also need some intermediate build to hand off to QA, and still be able to debug things easily while running actual production code. With Ant, you can create one target per build type, and have a different dependency list for each target. Properties can also be used to customize specific build types. Here is an example:

<target name="dev"
    depends=", js.preprocess, js.check.syntax, copy.jsp, copy.image.files, copy.css.files, copy.js.files, compile.jsp,, compile.webapps, copy.libs"
    description="Development build"/>

<target name="prod"
    depends=", js.preprocess, js.check.syntax, js.concatenate, js.minify, copy.jsp, copy.image.files, copy.css.files, copy.js.files, compile.jsp,, compile.webapps, copy.libs"
    description="Full production build"/>

<target name="">
    <property file=""/>
</target>

<target name="" depends="">
    <property name="js.preprocess.switches" value="-P -DDEBUG_VERSION=1"/>
    <property name="js.compressor.switches" value="--nomunge --line-break"/>
</target>

<target name="" depends="">
    <property name="js.preprocess.switches" value="-P -DDEBUG_VERSION=0"/>
    <property name="js.compressor.switches" value=""/>
</target>

Concatenate your JavaScript and CSS files

Concatenating JavaScript and CSS files contributes to making your site faster, according to Yahoo!’s Exceptional Performance team. File concatenation using Ant is trivial with the concat task. However, it is often important to concatenate JavaScript and CSS files in a very specific order (if you are using YUI, for example, you want yahoo.js to appear first, then dom.js/event.js, then animation.js, etc., in order to respect module dependencies). This can be accomplished using a filelist, or a combination of filesets. Here is an example:

<target name="js.concatenate">
    <concat destfile="${build.dir}/concatenated/foo.js">
        <filelist dir="${src.dir}/js"
            files="a.js, b.js"/>
        <fileset dir="${src.dir}/js"
            excludes="a.js, b.js"/>
    </concat>
</target>

It is also possible to prepend some comments to the destination file, which is often used for license and copyright information, using a nested header element (see the Apache Ant manual). Just keep in mind that minifying your code will make these comments go away, so you may want to do this later in the build process.

Preprocess your JavaScript files using CPP

Preprocessing JavaScript code is very useful to facilitate the development process. It allows you to define your own macros, and conditionally compile blocks of code. One easy way to preprocess JavaScript code is to use the C preprocessor, which is usually installed by default on UNIX machines, and is available on Windows machines as well via Cygwin. Here is a snippet of an Ant build.xml file illustrating the use of cpp to preprocess JavaScript files:

<target name="js.preprocess" depends="js.concatenate">
    <apply executable="cpp" dest="${build.dir}/preprocessed">
        <fileset dir="${build.dir}/concatenated"
            includes="**/*.js"/>
        <arg line="${js.preprocess.switches}"/>
        <srcfile/>
        <arg value="-o"/>
        <targetfile/>
        <mapper type="identity"/>
    </apply>
</target>

You can now write JavaScript code that looks like the following:


#if DEBUG_VERSION
function assert(condition, message) {
    // Handle the failed assertion here...
}
#define ASSERT(x, ...) assert(x, ## __VA_ARGS__)
#else
#define ASSERT(x, ...)
#endif

#include "include.js"

function myFunction(arg) {
    ASSERT(YAHOO.lang.isString(arg), "arg should be a string");
#if DEBUG_VERSION
    YAHOO.log("Log this in debug mode only");
#endif
}

Just a word of caution here: make sure you use the UNIX EOL character before preprocessing your files. Otherwise, you’ll have some issues with multi-line macros. You may use the fixcrlf Ant task to automatically fix that for you.

Minify your JavaScript and CSS files

Minifying your JavaScript and CSS files will help make your application faster according to Yahoo!’s Exceptional Performance team. I warmly recommend you use the YUI Compressor, available for download on this site. There are two ways to call the YUI Compressor from Ant. Here is the first one using the java task:

<target name="js.minify" depends="js.preprocess">
    <java jar="yuicompressor.jar" fork="true">
        <arg value="foo.js"/>
    </java>
</target>

Here is another way using the apply task that allows you to pass several files to the compressor using a fileset:

<target name="js.minify" depends="js.preprocess">
    <apply executable="java" parallel="false" dest=".">
        <fileset dir="." includes="foo.js, bar.js"/>
        <arg line="-jar"/>
        <arg path="yuicompressor.jar"/>
        <srcfile/>
        <mapper type="glob" from="*.js" to="*-min.js"/>
    </apply>
</target>

Update: Starting with version 2.2.x of the YUI Compressor, the Ant target described above needs to be slightly modified:

<target name="js.minify" depends="js.preprocess">
    <apply executable="java" parallel="false" dest=".">
        <fileset dir="." includes="foo.js, bar.js"/>
        <arg line="-jar"/>
        <arg path="yuicompressor.jar"/>
        <arg line="-o"/>
        <targetfile/>
        <srcfile/>
        <mapper type="glob" from="*.js" to="*-min.js"/>
    </apply>
</target>

Also consider the following Ant target to minify CSS files:

<target name="css.minify">
    <apply executable="java" parallel="false" dest=".">
        <fileset dir="." includes="*.css"/>
        <arg line="-jar"/>
        <arg path="yuicompressor.jar"/>
        <arg line="--line-break 0"/>
        <arg line="-o"/>
        <targetfile/>
        <srcfile/>
        <mapper type="glob" from="*.css" to="*-min.css"/>
    </apply>
</target>

Work around caching issues

Adequate cache control can really enhance your users’ experience by not having them re-download static content (usually JavaScript, CSS, HTML and images) when coming back to your site. This has a downside, however: when rev’ing your application, you want to make sure your users get the latest static content. The only way to do that is to change the names of these files (for example by adding a time stamp to each file name, or using the file’s checksum as its name). You must also propagate that change to all the files that refer to them.

The copy and file name replacement will be handled by a custom Ant task named FileTransform. The first step is to build the task from its source and define the custom task:

<target name="" depends="">
    <mkdir dir="${build.dir}/tools/classes"/>
    <javac srcdir="tools/src" destdir="${build.dir}/tools/classes">
        <classpath>
            <pathelement location="${ant.home}/lib/ant.jar"/>
        </classpath>
    </javac>
    <taskdef name="FileTransform" classname="FileTransform"
        classpath="${build.dir}/tools/classes"/>
</target>

(Note the use of a “-” in front of the name of a target, as in -copy.js.files below. This is used to make the target “private”, i.e. not invokable directly.) The second part is the copy of the static content that is going to be cached, using the newly defined FileTransform task:

<target name="-copy.js.files" depends="">
    <mkdir dir="${build.dir}/js"/>
    <FileTransform todir="${build.dir}/js"
        propertyfile="${build.dir}/">
        <fileset dir="site/js" includes="*.js"/>
    </FileTransform>
</target>

The mapping between the old names and the new names is stored in a properties file. The final step is to copy the files that refer to the files whose names were changed in the previous step. For this, we use the copy task:

<target name="-copy.php.files" depends="-copy.js.files">
    <mkdir dir="${build.dir}/php"/>
    <copy todir="${build.dir}/php">
        <fileset dir="site/php" includes="*.php"/>
        <filterset>
            <filtersfile file="${build.dir}/"/>
        </filterset>
    </copy>
</target>

You can download an archive containing a very simple Ant project illustrating this advanced technique.

Deploy your application

Deploying a web application is usually done by copying a set of files over to the production servers and running a list of commands on those servers (stop the services, run a few shell commands, and finally restart the services).

You have many options to copy files to remote servers with Apache Ant. You could use the copy task to copy files to a locally mounted file system, or you could use the optional FTP task. My personal preference is to use scp and rsync (both utilities are available on all major platforms). Here is a very simple example demonstrating the use of scp with Apache Ant:

<apply executable="scp" failonerror="true" parallel="true">
    <fileset dir="${build.dir}" includes="**/*"/>
    <srcfile/>
    <arg line="${live.server}:/var/www/html/"/>
</apply>

And here is an example showing how you can run remote commands:

<exec executable="ssh" failonerror="true">
    <arg line="${live.server}"/>
    <arg line="sudo webctl restart"/>
</exec>


I hope this article has given you some ideas of what’s possible to automate with modern build tools such as Apache Ant (other cool things include automatic generation of documentation, source control integration, packaging using rpm and such, etc.). My professional experience has taught me that the build process cannot be an afterthought. Just like performance, it should be part of your project from the very beginning. The second lesson is that all web developers need to be active participants in creating and maintaining the build process (with maybe one build engineer to centralize all the information) instead of relying on somebody else to build their components. Finally, keep your build process alive and maintain it actively, as your needs will change over your project’s life cycle.

Posted in Web Development | 25 Comments