News
Posted over 15 years ago by dylan
We've announced a new project, the Dojo Extensions for Adobe AIR. This project makes it easier to use Dojo for higher-level desktop application features within AIR. Currently this lives as a project on Google Code, under the BSD license. If you have signed a Dojo Foundation CLA or CCLA, we will accept your contributions. A longer term goal that we have, if there is interest, is to create a Dojo Desktop project that would provide similar convenience methods for native OS X and iPhone apps (using embedded WebKit Cocoa-JS bindings or something like PhoneGap), Prism, and more.
Posted over 15 years ago by jaredj
While looking at some Adobe technology, I came across James Ward's site. James is a Technical Evangelist for Adobe. One of the links he has on his site is a Flash-based performance application that "walks you through benchmarks for various methods of loading data in RIAs." The data was used at a conference to show Adobe's performance versus Dojo and general Ajax/JSON. Note: if you try it yourself, I had problems using Firefox 3.0.x with his Flash app, so I recommend using Firefox 2.0.

It's an Adobe Flex application that compares the load times for:

- Ajax HTML - 5000 rows
- Ajax JSON - 5000 rows
- Dojo - 1000 rows
- Flex E4X - 5000 and 20000 rows

Visually, it displays a line graph showing the server exec time, transfer time, parse time, and the render time. Very useful stuff in a good layout. What struck me, though, was the performance claim for Dojo and Ajax/JSON in general. Dojo was shown to be so slow that he could only load 1000 rows, and JSON was shown to be far too inefficient compared to AMF (Adobe's Action Message Format). What's up with that? Having been a Dojo developer for some time now, I've never had the performance issues in my apps that the Census app's numbers claimed for various Ajax techniques. So, in the parlance of Mythbusters, let's dig a little deeper.

My test configuration: Before I begin my dive into James' application, I'll go through the machines I ran my tests on for a reference point. Naturally, the power of a machine will affect the results of the test, so it's only fair to disclose all of this up front.

Ajax client: Lenovo ThinkPad Z61P, Core 2 Duo @ 2.0 GHz, 2GB RAM, Windows XP 32-bit SP3, running:
- Firefox 2
- Internet Explorer 7
- Opera 9.61
- Safari 3.1
- Google Chrome Beta

Server side (for my JSON generation services and the DB access used in the server-generated JSON performance tests):
- Dell workstation with an Intel Xeon EM64T 3.4 GHz CPU, 2GB RAM
- 64-bit Linux, kernel 2.6.5
- 64-bit WebSphere Application Server, version 6.1.0.15
- 32-bit Apache HTTP Server 2.0.47

Where possible, I've included the source code of parts of my tests. I hope to include even the server side pieces (a basic servlet using a SQL DataSource) in the future.

First off, I didn't look at James' application code. It is covered under the GPL, and since I am a direct contributor to Dojo as well as being employed by another large company, I cannot be source or algorithm contaminated, or even give the appearance of misappropriation of code. So all my analysis was done using James' application as a black box, just looking at the types of requests it makes and the data returned.

To begin, his version of Dojo is ancient. He's using Dojo 0.4.3 as a comparison. All I have to say to that is OW! That is an extremely old version of Dojo and was well known to have performance issues. One of the major goals of Dojo 1.0 during the re-architecture of the library was to fix those performance issues. This included replacing the older O(n^2) filtering table with a virtual grid that can handle far, far more rows efficiently by only paging in and rendering the rows that are in view. While Dojo 1.x will handle a 1000-row table, as a principle of client-side architecture you rarely, if ever, want to send a large table down to the client to parse. There are a couple reasons for that statement:

- No user can view 1000 rows at the same time, so why bother sending all the data in one shot? It often just wastes bandwidth and rendering time.
- Sorting large data sets in the client isn't often efficient, or good at keeping the order consistent with other pages in the overall data set. Sure, you can sort 0-5000 rows, but what happens when you have 100,000, or worse, 1,000,000 rows to sort across? What if your client isn't a computer with lots of memory (think of a handheld device like a BlackBerry or iPhone)? Huge data sets just aren't feasible to sort in a wide variety of clients. In most cases it's better to leave the sorting to programs designed for fast sorting and data lookups. To put it simply, let the database or service handle the sorting; the client should only be concerned with displaying it. A database is designed to sort 100,000 rows; a web browser on a mobile device isn't.

Dojo 1.x took those considerations into account when developing the dojo.data API. It's an abstraction layer for accessing data services, so the user of a data store doesn't have to know where sorting and the like occur (that is left up to the data store implementation to decide). Dojo itself provides several stores that can read data in various formats and expose them in a common way. Some execute completely in the client; some use a client/server service model for accessing data. Data-bound widgets like Grid don't even know where the sorting occurs: the widget asks for a page of data with a given ordering applied, and the store hands it back. For huge data sets, the store is most likely just making a call to a database service to sort and hand back the page. But to avoid going further off on a tangent about dojo.data, the point is that Dojo 1.x tries to be a lot smarter about where certain actions occur, so that they are handled in the most efficient manner possible.
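As a rough illustration of that division of labor, here is a minimal sketch of my own (not code from James' app or the Census tests; the service URL, field names, and page size are invented) showing a server-backed dojo.data store handed to the Dojo 1.2 grid, which only ever asks the store for the page and sort order it needs:

dojo.require("dojox.data.QueryReadStore"); // store that delegates paging/sorting to a server service
dojo.require("dojox.grid.DataGrid");       // the Dojo 1.2 virtual grid

dojo.addOnLoad(function(){
    // The store points at a (hypothetical) server-side service that does the
    // real sorting and paging; only the rows in view are ever fetched.
    var store = new dojox.data.QueryReadStore({
        url: "/census/people" // invented URL for this sketch
    });

    var grid = new dojox.grid.DataGrid({
        store: store,
        structure: [
            { field: "name", name: "Name", width: "200px" },
            { field: "sex",  name: "Sex",  width: "80px"  }
        ],
        rowsPerPage: 50 // the grid requests data from the store one page at a time
    }, "gridNode");     // "gridNode" is an empty div in the page
    grid.startup();
});

The idea is that sorting a column in the grid becomes a request parameter on the service rather than a client-side sort of the full data set.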
JSON vs AMF: The more interesting comparison is JSON versus AMF. AMF began its life as a proprietary protocol from Adobe and is a similar concept to Java's object serialization. It's a binary format that relies on strongly typed definitions of the data structures. Adobe has recently open sourced the specification in an attempt to gain wider acceptance. JSON, being a loosely typed language subset, includes the structural details of how the data is represented within the data itself.

JSON as an efficient transport: It's the loosely typed structure of JSON that gives the developer a lot of power and flexibility in how to represent their data. It also provides a lot of opportunity for inefficiency, which may be the case in James's demo. First off, JSON can be generated so that it compresses close to or just as well as AMF under GZIP. In fact, JSON should generally compress better, percentage-wise of the original structure size, than AMF. The reason for that gets into lossless compression theory and what is called the 'entropy' of your data, but that's honestly a really dull subject and isn't worth going into here. Think of it this way: text often compresses better than binary, since text tends to have lots of repeated sequences that can be code-indexed, whereas binary tends to be more random and it is harder to build long sequences of like data to be code-indexed. The other thing I've heard about AMF is that for smaller data sizes, the JSON equivalent will be smaller. In fact, I'll be taking a look at that in a bit. My basic understanding of why it is smaller is that AMF can extract structural details out into a header automatically, whereas JSON encodes them within the actual object constructs.

In either case, for AMF or JSON, if you're sending a large amount of data you should always try to compress it using GZIP or DEFLATE filters if the client can support those encodings. This will drastically reduce 'on the wire' times. It was great to see that the Census app was already doing GZIP compression as a standard operation for both its Ajax/JSON and AMF example tests.

Now, you might be wondering: since AMF likely encodes object structure details in its header automatically, will it always be smaller than JSON for large data sets? The short answer is no, it doesn't have to be. Remember where I said JSON is flexible? Well, this is the point where its flexibility becomes really useful in reducing the size of a JSON payload. So let's use JSON's flexibility and see how we can improve a JSON payload size, and even better, let's use the Census app data as our starting point so it's readily obvious that the compressed payload size can be reduced.

Using Firebug, I was able to get the URL the application calls when it loads its 5000 rows of JSON data. For reference, it's: http://www.jamesward.com/census/servlet/CensusServiceServlet?id=ajax_json&command=getJSON&rows=5000&gzip=true

The first thing to notice is that all the rows are homogeneous, meaning all the rows have the same attributes. Homogeneous data can encode format information in the header of the JSON data instead of repeating it redundantly in each object. This can be done by converting all the JSON objects into arrays, so that the items property is just an array of arrays of values, where the index into each array maps to a string name in a 'cols' array. See the following for clarification:

{ "cols": ["field0", "field1", "field2", ...], "items": [ ["field0Val", "field1Val", ...], ... ] }

So, if we take the payload from the servlet and apply this formatting, we get: GZIP JSON for that payload: 38.1 K. AMF is now only 78% the size of JSON at 5000 rows. Good improvement! But we can still do better.

Data with only finite values can be represented as integers instead of strings; effectively, represent finite values as enumerations instead of a String type. Again, their data provides places where this can be neatly applied. For example, look at the 'sex' field. This can only have the values "Male" or "Female". So why not represent those as numbers in JSON instead? For example, represent "Male" as 0 and "Female" as 1. This encodes each such field as one byte instead of six. More generally, you can consider this optimization using a format of:

{ "cols": ["field0", "field1", "field2", ...], "enums": { "field0": ["enum1", "enum2", ...] }, "items": [ ["field0Val", "field1Val", ...], ... ] }

Okay, now GZIPing the payload with this optimization applied: GZIP JSON for that payload: 37.2 K. AMF is now only 81% the size of JSON at 5000 rows. Great, more size improvement. It's not as drastic as the first one, but it's still a few percent.

Graphically, applying optimizations to the JSON payload shows the following trend in size reduction:

Figure 1: JSON GZIP size versus applied optimizations: [inline:Size.jpg]

We could certainly go on with the optimization hunt for James's JSON data, but I think the point has been made: JSON's flexibility grants you the ability to morph your wire format to improve the efficiency of the transfer for your specific application. As demonstrated, with only a little adjustment, I was able to reduce James' application data to within seven kilobytes of AMF at 5000 rows.
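To make that flattening concrete, here is a small sketch of my own (not James' code and not part of Dojo; the field handling is deliberately simplified) showing how homogeneous objects could be packed into the cols/enums/items form before serialization and expanded again after parsing:

// Flatten homogeneous objects into the cols/enums/items form described above.
// 'enumFields' lists the fields with a small, finite set of values (e.g. "sex").
function packItems(objects, enumFields){
    var cols = [], enums = {}, items = [], i, j, key, val;
    for(key in objects[0]){ cols.push(key); }
    for(i = 0; i < enumFields.length; i++){ enums[enumFields[i]] = []; }
    for(i = 0; i < objects.length; i++){
        var row = [];
        for(j = 0; j < cols.length; j++){
            val = objects[i][cols[j]];
            if(enums[cols[j]]){
                // store the index into the enum table instead of the string
                var idx = indexOf(enums[cols[j]], val);
                if(idx < 0){ enums[cols[j]].push(val); idx = enums[cols[j]].length - 1; }
                val = idx;
            }
            row.push(val);
        }
        items.push(row);
    }
    return { cols: cols, enums: enums, items: items };
}

// Expand the packed form back into an array of plain objects.
function unpackItems(packed){
    var objects = [], i, j;
    for(i = 0; i < packed.items.length; i++){
        var obj = {}, row = packed.items[i];
        for(j = 0; j < packed.cols.length; j++){
            var col = packed.cols[j], val = row[j];
            obj[col] = packed.enums[col] ? packed.enums[col][val] : val;
        }
        objects.push(obj);
    }
    return objects;
}

// Tiny helper, since Array.indexOf isn't available in every 2008-era browser.
function indexOf(arr, val){
    for(var k = 0; k < arr.length; k++){ if(arr[k] === val){ return k; } }
    return -1;
}

Running the rows through something like packItems before dojo.toJson on the server, and through unpackItems after dojo.fromJson on the client, is the kind of adjustment that produced the size reductions above.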
Could it be reduced further by enumerating out more common fields? Yes. But let's go back to a comment I made earlier. Remember where I said that AMF was larger than JSON for smaller payload sizes? Well, now is where I can prove it, and all by using James' application.

Using Firebug again, I was able to identify the request that was generating AMF. The URL is: http://www.jamesward.com/census/flex_amf3.html?rows=5000&gzip=true

Great, it even lets me specify the row count, just like the JSON URL does, so we can explore payload sizes. But this turned out to be a little harder than I first expected, because that URL is only one of three requests the application must make to get row data. Yep, that's right: where the JSON service is a single request, the AMF service is actually three requests. You can see that below in my screenshot from a debugging proxy:

[inline:debuggingproxy.jpg]

I should point out here that his application doesn't seem to include the transfer times and sizes of the intermediary data sent. Should it? Probably, as it's all necessary information for his AMF/Flex application to render. But that's neither here nor there at this point. I said we would look at the data payload sizes, so that will be the focus. To get the AMF payload size I had to use the debugging proxy to locate each data package as I scaled it from 0 rows up to 12400 rows. The results were interesting.

Figure 2: AMF versus JSON payload size: [inline:SizeVersusRowsN.jpg]

Okay, so it does look like up to roughly 200 rows the payloads are pretty much the same size. But are they really? Given the scaling on the Y axis, it's hard to see the differences in payload size until 400 or so rows. So let's adjust the Y axis a bit so that it's not a linear scale.

Figure 3: AMF versus JSON payload size (logarithmic): [inline:SizeVersusRowsL.jpg]

Aha! Okay, now we can really see the difference. Up to about 80 to 100 rows, the JSON payload is actually smaller than AMF for the same data. Why? Remember when I said AMF likely encodes information about the structure of the object in its headers? Well, in smaller payloads, that information actually ends up larger than the actual data being sent. Simply put, its overhead is larger than JSON's for smaller data sizes. This also implies something else: JSON would be a better format than AMF for paging data (data in more consumable chunks, such as 50 rows at a time, which is about the size of a printed page). And remember, this is James' data without any of the optimizations I went through earlier. I could probably make the JSON payloads even smaller by applying those changes. But this analysis has gone on long enough as it is, and there is another topic I want to bring up with regard to his application and the times it shows. So, moving right along to the next topic.

JSON takes longer to parse? Wow. In James's demo, he claimed that it took Firefox 2 1.3 seconds to parse the JSON data. Maybe he's right, using Dojo 0.4.3 or some other parser, or maybe he's somehow including the wire transfer time in it, but regardless, that number seems rather large and went against all my experience of how fast JSON parses. So, once more into the myth busting! As with my previous tests, I took the data output from James' servlet and stored it on an Apache server with GZIP enabled. This was to mimic the server request and gzip behavior.
From there I made a very simple HTML page that just uses dojo.fromJson() to time how long it takes the browser to parse the JSON text back into a JavaScript object. The code I used can be seen below:

<head>
  <script type="text/javascript" src="dojo.js"
          djConfig="isDebug: true, parseOnLoad: true, usePlainJson: true"></script>
  <script type="text/javascript">
    function getGridData(){
      dojo.xhrGet({
        url: "test.json",
        handleAs: "text",
        timeout: 60000,
        load: function(response, ioArgs){
          var sTime = new Date().getTime();
          var obj = dojo.fromJson(response);
          var eTime = new Date().getTime();
          var delta = eTime - sTime;
          alert("Total processing time: " + delta/1000);
        },
        error: function(error, args){
          console.warn(error);
        }
      });
    }
    dojo.addOnLoad(getGridData);
  </script>
  <style type="text/css">
  </style>
  <title>DojoGrid</title>
</head>

From there I just loaded that page into a variety of web browsers. In each one I ran it four times and cleared the cache between each run. The results I got were far, far better than James' application's claim of 1.3 seconds (in the same Firefox browser). And even more interesting, I got a result for one browser that I didn't expect. Please see the following chart.

Figure 4: JSON parse time for various browsers versus James' application's claim: [inline:ParseTime.jpg]

Dojo 1.2.1's dojo.fromJson(), parsing the same JSON in the same browser, took about 0.187 seconds on average in Firefox 2.0. That looks much, much better than 1.3 seconds (the black bar), doesn't it? The biggest surprise was that Internet Explorer 7 actually had the best JSON parse time of all of them, at 0.078 seconds.

Now that I saw a much better parse time than his test app claimed, I looked a little harder at it and noticed something that seemed strange to me. Consistently, the transfer time his app reported for an AMF payload was much longer than for the JSON payload. Wait, doesn't that conflict with the statement his app made about how AMF is smaller than JSON? If AMF is indeed smaller, then the transfer (on the wire) time for it should be less than JSON's, shouldn't it? That piqued my deductive interest, as it goes against what would be expected. So I did some further analysis of his numbers.

The application claimed that AMF's parse time is 0 seconds. Okay, that seems a bit strange, but parse time could just be really, really fast. But I have another theory that explains the parse time being zero and why the AMF transfer time is longer.

My theory: the stated AMF transfer time is in reality the AMF transfer and parse time. I believe that what his app calculated for the transfer time includes the parsing and object load time the Flex engine goes through with an AMF stream, because I suspect he has no real way to get the parse time separate from the transfer (wire only) time, so his graph includes them all as transfer time and leaves parse time at 0. Now, I may be wrong in that deduction, but consider the following: if we take his number for the JSON transfer (wire) time and add it to the JSON parse time I calculated, we get a very interesting value, around 0.7 seconds. This value is interesting because it is almost identical to the AMF transfer time, which was shown in my run as 0.874 seconds!
So, if I am correct in my deductions, this would mean the transfer and load time for AMF versus JSON is roughly the same. That seems reasonable, as the data is roughly the same size and both Flex and JavaScript run in virtual machine interpreters. Now, obviously, when you run the same tests you may see some variation in the numbers. Everything is affected by the speed of the machine you run the test on, the browser version, the Flash version, and so on. The key thing to look at is not the numbers separately, but the numbers relative to each other on the same system.

Okay, so this is getting long. Please bear with me, I have just one more topic I want to cover.

Server-generated JSON takes longer to create than server-generated AMF? In James's demo, I didn't see any real context for how he was creating his JSON data on the server side. Maybe it was hard-coded, or maybe he was using an open source JSON library; I'm not certain, since I did not look at his code to avoid contamination concerns. For my test, I took an Apache Derby database containing 100,000 rows and used JSON4J to convert the results into a JSON data stream. The dataset contained columns similar to the ones they used. I did not see anything close to the server generation time they claim of 2.1 seconds.

The JSON parser and generation library I used was the JSON4J library from the WebSphere Application Server Feature Pack for Web 2.0. The server side was WebSphere Application Server, version 6.1. What I found in my simple DataSource (JDBC) servlet was that doing a basic SELECT * FROM PEOPLE on my table and returning the first 5000 rows as JSON (using the JSON4J parser) gave a server generation time of 0.255 seconds. This is much closer to their AMF time of 0.078 seconds.

I decided to go a bit further and try to see what just the JSON serialization cost, so I removed the JSON generation entirely. I wanted to see how much server time was spent just iterating over the database rows and accessing fields. What I determined was that it took the server 0.100 seconds just to iterate over the 5000 rows without serializing anything. That alone is greater than their claimed time for the DB query, loading the AMF data objects, and serializing. Using that number of 0.255 seconds and the DB iteration time of 0.100 seconds, I can deduce that the JSON server overhead is about 0.155 seconds using the JSON4J utility library. That is still nowhere near the 2.1 seconds his application claimed it took his server to generate JSON.

So the numbers raise the question of what the Flex application is doing for AMF generation, up to serialization of the data on the server side. I can't see how they could be using the same DB queries and the like, with the only difference being serialization, and still have such huge differences in numbers. Anyway, that question remains open. I'm not sure how to continue examining the server-side claims without knowing more about what his application did for AMF generation, and I can't look at that because of contamination concerns. So, if anyone can shed some light on what they're doing and how it can show faster numbers than even a simple DB row traversal, I would love to know.

In summary: In closing, I would like to say that this investigation has confirmed a few things for me. The first is that Flash and Flex are good technologies, hold up very well to large data sets, and use efficient formats. Adobe has done a great job in designing them.
My intention with this post was not to attack Flash, only some of the claims made about its superiority to Ajax techniques. I just didn't feel that James' Census application gave a completely fair view of how well Ajax/JSON can be used in applications. The second thing it confirmed for me is that while Flash and Flex are nice tools, you don't need them to create well-performing Ajax/JSON applications. As my analysis showed, through good construction of your data payloads you can get even large payload sizes close to those of a binary protocol such as AMF. And if you're doing paging-style access, smaller payloads are more size-efficient in JSON than they are in the AMF binary format. Lastly, my work on a simple server-side servlet that just issued SQL queries and returned the data as JSON showed me that server-side JSON rendering can be very fast. It's certainly not the horrible performer James' app tries to make it out to be. So, if you take anything from this, I hope it is a better understanding that Flash and Flex are good tools, but Open Web (Ajax) techniques can be just as good and, in reality, just as efficient.
Posted over 15 years ago by dylan
Dojo rarely wins popularity polls, and I've often complained about the lack of science behind polls. There are a variety of reasons for that, but with the latest results from the O'Reilly InsideRIA poll, "Which AJAX framework do you currently prefer?", "with almost 50% of the vote, Dojo is the clear winner." Dojo has made a tremendous amount of progress this year, and we're going to continue working hard to make Dojo a better toolkit for you.
Posted over 15 years ago by haysmark
Learn how to write acceptance tests and test "user stories" with your applications using the new doh.robot module, dijit.robotx, for Dojo 1.2, without modifying your application code or even upgrading your version of Dojo.

Introduction: If you followed my previous blog post on doh.robot, you have a general idea of the challenges of testing Web UIs and how doh.robot fills these needs. If not, I strongly suggest that you read it first, because this blog post assumes that you have basic knowledge of the doh.robot API.

Acceptance testing: The last blog post describes methods for unit testing: it assumes that you are perfectly OK with modifying the test page to contain DOH test code. But what if you are testing application code, say during an acceptance test phase, and you absolutely can't modify your application code? Or what if you are using doh.robot for accessibility testing and you want to test the tab order of your *application* and not the tab order of some insignificant unit test? The methods described in the previous post just won't work for you: not only would you have to upgrade to Dojo 1.2 to use the doh.robot code, but you would also have to insert test code into your application logic, which is bad. What you really want is a test framework that can run in the background and won't interfere with your application code.

Clicking links: The previous post also assumed that your tests are constrained to one page. What if you need to write a test that clicks a link or a form submit button? This is a very common requirement for testing Web applications: your customer gives you user stories, scenarios an end user might face while visiting your Web site. The user is naturally going to click links that change the page. But all of the examples you have seen so far of the DOH test framework assume that DOH lives in the Web page and is destroyed when the page changes. You might wonder how to keep the DOH test framework running even as the robot navigates away from the page that DOH first loaded.

What dijit.robotx can do for you: dijit.robotx is a new module for DOH, included in Dojo 1.2, that can load an arbitrary application and run automated doh.robot test scripts against that application environment. This serves two purposes: it enables you to execute automated tests on release candidate builds of your applications, with no modifications to your application, and it enables you to write long-lived tests that can smartly cross page boundaries and continue execution. This is huge. Whereas with plain doh.robot you had to insert test code into your application code, with dijit.robotx you can keep your test code somewhere else. Whereas with doh.robot you had to embed test code into every page that the user story visited to ensure that the robot kept moving, with dijit.robotx you can write the entire user story in just one file that spans any number of page changes. And whereas with doh.robot you had to upgrade your application to Dojo 1.2 to take full advantage of the robot's features, with dijit.robotx you can test any Web application with zero modifications, irrespective of the AJAX framework the application uses.

The dijit.robotx API: The dijit.robotx include mixes two functions, doh.robot.initRobot() and doh.robot.waitForPageToLoad(), into the doh.robot namespace; they map exactly to the two features listed above.

doh.robot.initRobot(): You use initRobot() to load an application for testing.
Here is the syntax:

initRobot: function(/*String*/ url){
        // summary:
        //        Opens the application at the specified URL for testing, redirecting dojo to point
        //        to the application environment instead of the test environment.
        //
        // url:
        //        URL to open. Any of the test's dojo.doc calls (e.g. dojo.byId()), and any
        //        dijit.registry calls (e.g. dijit.byId()) will point to elements and widgets
        //        inside this application.
        //
}

When you call initRobot, the browser loads the application into a frame and points the test's Dojo context to the frame's content. This means:

- The global variable dojo.doc will point to your application's document.
- Functions that are part of Dojo, like dojo.byId(), will fetch elements from your application's context.
- If your application uses Dijit widgets, the test script will use the application's Dijit registry, so dijit.byId() will point to widgets in your application.
- Standard global variables, like window and document, will point to the test script's environment, not the application environment.
- You will only be able to assign variables their values once the tests execute.

I stress the last point. initRobot returns immediately, before your application has finished loading. If you create variables outside the scope of a test block and try to assign them values or DOM elements from your application, they will all be invalid, because the application hasn't loaded yet. So what do you do? Declare your variable names like you normally would, but don't assign them values yet. Instead, make your first test assign the values. That way, you are guaranteed that your application's environment is available.

Example: Here is an example of a test that uses initRobot. The test interacts with a completely separate page consisting of three dijit.Spinner widgets, residing here: http://archive.dojotoolkit.org/nightly/checkout/dijit/tests/form/test_Sp...

Notice that there is no robot code in the page that the robot is testing.
Now here is the separate test script that is automating that page:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
    <title>doh.robot Spinner Test</title>
    <style>
        @import "../../../../util/doh/robot/robot.css";
    </style>
    <!-- required: dojo.js -->
    <script type="text/javascript" src="../../../../dojo/dojo.js"
            djConfig="isDebug: true, parseOnLoad: true"></script>
    <script type="text/javascript">
        dojo.require("dijit.dijit");  // optimize: load dijit layer
        dojo.require("dijit.robotx"); // load the robot
        dojo.addOnLoad(function(){
            // declare variables but do not assign them values
            var spin1;
            var spin2;
            var spin3;
            var safeClick;
            var delta=1; // redefine with doh.robot.mouseWheelSize when it is available

            // the initRobot call goes here
            doh.robot.initRobot('../test_Spinner.html');

            doh.register("setUp",{
                name: "setUp",
                timeout: 15000,
                setUp: function(){
                    // assign variables HERE
                    spin1=dijit.byId('integerspinner1');
                    spin2=dijit.byId('integerspinner2');
                    spin3=dijit.byId('realspinner1');
                    safeClick=dojo.byId('form1');
                },
                runTest: function(){
                    // assert onChange not fired
                    doh.is("not fired yet!",dojo.byId('oc1').value);
                    doh.is(1,spin1.smallDelta);
                    var s=": 900\n" +
                        "integerspinner1: 900\n" +
                        ": not fired yet!\n" +
                        ": 1,000\n" +
                        "integerspinner2: 1000\n" +
                        ": \n" +
                        "integertextbox3: NaN\n" +
                        ": 1.0\n" +
                        "realspinner1: 1\n";
                    doh.is(s, dojo.doc.displayData().replace(/[a-zA-Z0-9_]*_displayed_/g, ""));
                }
            });

            doh.register("arrowButton",{
                name: "spinner1_invalid",
                timeout: 15000,
                runTest: function(){
                    // assert invalid works
                    var d=new doh.Deferred();
                    doh.robot.mouseMoveAt(spin1.focusNode,500);
                    doh.robot.mouseClick({left:true},500);
                    doh.robot.sequence(function(){
                        spin1.focusNode.value="";
                    },500);
                    doh.robot.typeKeys("0.5",500,300);
                    doh.robot.sequence(function(){
                        try{
                            doh.is(false,spin1.isValid());
                            d.callback(true);
                        }catch(e){
                            d.errback(e);
                        }
                    },500);
                    return d;
                },
                tearDown: function(){
                    spin1.attr('value',1);
                }
            });

            // ... some more tests

            // all tests registered; notify DOH
            doh.run();
        });
    </script>
</head>

See it in action/view the full source code: http://archive.dojotoolkit.org/nightly/checkout/dijit/tests/form/robot/t...

The test consists of 5 steps:

1. The test declares variables spin1-3 to store convenient references to the Spinner widgets when the application loads.
2. The test calls initRobot, passing the URL of the page it wants to test.
3. The test registers a setUp test to assign the variables spin1-3 their values. Note that you are not required to have a test named setUp; this is just a sensible name for a test whose purpose is to assign variables their values.
4. The test registers any number of DOH tests, such as the "spinner1_invalid" test here, as usual. The test assumes that it is executing in the context of the application.
5. The test calls doh.run() to tell DOH that all tests are registered.

When your external application loads and DOH receives the doh.run() call from the test script, DOH begins executing your tests on the application.

Digression: cross-domain security. The initRobot call in the above example loads an application that resides on the same server. If your testing requirements allow you to stash your tests on the same server as your application, then this works just fine for you. But what if you absolutely have to test an application residing on a different domain? If you just throw the URL at initRobot, initRobot will faithfully load the application at that URL, but the browser will deny DOH access to the application's content. In this scenario, you have two options:

- Run the browser in trusted mode (the firefox -chrome command line flag, or mshta instead of IE)
- Trick the browser into thinking that the application and test script are running on the same server

One possible implementation of the second solution is to create a simple reverse-proxy Web server. The reverse proxy is an ordinary Web server that joins local files and remote servers.
To browsers connecting to the reverse proxy, the application files and test files appear to be on the same server! This is easy to implement. Suppose you have an application server running an application called Application at http://192.168.0.6:8080/Application/. Your test files sit on an Apache Web server at http://192.168.0.7/tests/Application/. To fix the cross-domain problem, you want requests by the test to the application to ask for http://192.168.0.7/Application/ instead of http://192.168.0.6:8080/Application/. In your httpd.conf, you add:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule rewrite_module modules/mod_rewrite.so
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule     /Application/(.*)    http://192.168.0.6:8080/Application/$1 [P]
</IfModule>

Now the reverse proxy will silently route requests from http://192.168.0.7/Application/ to http://192.168.0.6:8080/Application/. You can write your initRobot call to load your application with this relative URL:

doh.robot.initRobot('/Application/');

To load your tests, you still use the URL to your test server, http://192.168.0.7/tests/Application/, and the browser will think that your application resides on the same server, so doh.robot will work. By all means though, if your testing requirements allow you to physically put your test files on the same server as your application, go for it.

waitForPageToLoad: You can load an external application, so now you want to click links and open new pages within that application. Here is the syntax for waitForPageToLoad:

waitForPageToLoad: function(/*Function*/ submitActions){
        // summary:
        //        Notifies DOH that the doh.robot is about to make a page change in the application
        //        it is driving, returning a doh.Deferred object the user should return in their
        //        runTest function as part of a DOH test.
        //
        // description:
        //        Notifies DOH that the doh.robot is about to make a page change in the application
        //        it is driving, returning a doh.Deferred object the user should return in their
        //        runTest function as part of a DOH test.
        //        Example:
        //            runTest:function(){
        //                return waitForPageToLoad(function(){ doh.robot.keyPress(dojo.keys.ENTER, 500); });
        //            }
        //
        // submitActions:
        //        The doh.robot will execute the actions the test passes into the submitActions
        //        argument (like clicking the submit button), expecting these actions to create a
        //        page change (like a form submit). After these actions execute and the resulting
        //        page loads, the next test will start.
        //
}

waitForPageToLoad takes a function called submitActions. The robot expects submitActions to contain the final instructions you want to execute on this page. For example, if you want to navigate away from the page by clicking a link, your submitActions function should contain doh.robot instructions that click the link. The DOH runner will wait while the robot executes the code in this block, until it receives a page load event. When that happens, DOH loads the next test you registered and proceeds from there. waitForPageToLoad returns a Deferred object. The idea is that you can, in turn, return this Deferred object to DOH so that it knows to halt execution of further tests until the next page loads.
Example: The following sample uses waitForPageToLoad to test a user story for PlantsByWebSphereAjax, an application available in the IBM WebSphere Application Server Feature Pack for Web 2.0. The user story flows like this:

1. The user is looking to buy flowers on PlantsByWebSphereAjax.
2. The user adds two flowers to the shopping cart.
3. The user clicks checkout.
4. When the next page loads (a login screen), the user logs onto the website.
5. When the next page loads (a shipping info page), the user fills in the shipping info and credit card information to finalize the sale.

PlantsByWebSphereAjax contains a shopping cart built on Dojo DnD. Users literally drag images of products into the shopping cart to select them for purchase. When the user is ready to check the items out, the user clicks the checkout button and the contents of the DnD container are submitted to the server-side logic for processing.

In the following sample, the robot uses initRobot to load the application. In the test, the robot acts just like a user and drags an item into the shopping cart. The robot uses waitForPageToLoad to click the checkout button, triggering a page change to a login page. After the login page appears, the robot fills in its credentials. The robot again uses waitForPageToLoad to click login. The robot fills in its address and credit card information, and the test concludes.

doh.robot.initRobot('/PlantsByWebSphereAjax/');

doh.register('user_story1',{
        name: 'selectitems',
        timeout: 60000,
        runTest: function(){
                var d = new doh.Deferred();
                // select a flower
                doh.robot.mouseMoveAt('dijit_layout__TabButton_1', 500, 1000, 47, 6);
                doh.robot.mouseClick({left:true, middle:false, right:false}, 1000);
                doh.robot.mouseMoveAt(function(){ return dojo.doc.getElementsByTagName('IMG')[15]; }, 8000, 1500, 58, 45);
                doh.robot.mouseClick({left:true, middle:false, right:false}, 1000);
                // add selected flower to cart
                doh.robot.mouseMoveAt(function(){ return dojo.doc.getElementsByTagName('BUTTON')[0]; }, 5000, 2000, 36, 15);
                doh.robot.mouseClick({left:true, middle:false, right:false}, 1000);
                // next page
                doh.robot.mouseMoveAt(function(){ return dojo.doc.getElementsByTagName('A')[15]; }, 1000, 2000, 12, 10);
                doh.robot.mouseClick({left:true, middle:false, right:false}, 1000);
                // drag flower into shopping cart
                doh.robot.mouseMoveAt(function(){ return dojo.doc.getElementsByTagName('IMG')[14]; }, 5000, 1000, 63, 75);
                doh.robot.mousePress({left:true, middle:false, right:false}, 1000);
                doh.robot.mouseMoveAt(function(){ return dojo.byId('shoppingCart'); }, 5000, 1000);
                doh.robot.mouseRelease({left:true, middle:false, right:false}, 1000);
                // assert price==$16
                doh.robot.sequence(function(){
                        if(/\$16/.test(dijit.byId('ibm_widget_HtmlShoppingCart_0').cartTotalPrice.innerHTML)){
                                d.callback(true);
                        }else{
                                d.errback(new Error('Expected string containing $16, got ' +
                                        dijit.byId('ibm_widget_HtmlShoppingCart_0').cartTotalPrice.innerHTML));
                        }
                }, 1000);
                return d;
        }
});

// use waitForPageToLoad to click the checkout button
// tests will wait for the next page to load
doh.register('user_story1',{
        name: 'selectitems_pagechange',
        timeout: 60000,
        runTest: function(){
                return doh.robot.waitForPageToLoad(function(){
                        // click submit
                        doh.robot.mouseMoveAt(function(){
                                return dojo.byId('checkout_button');
                        }, 1623, 801);
                        doh.robot.mouseClick({left:true, middle:false, right:false}, 992);
                });
        }
});

// next page has loaded; continue executing tests
// in this case, the next page of the user story is a login page
doh.register('user_story1',{
        name: 'login',
        timeout: 60000,
        runTest: function(){
                // log user in
                var d = new doh.Deferred();
                doh.robot.mouseMoveAt(function(){ return dojo.byId('email'); }, 500, 1000);
                doh.robot.mouseClick({left:true, middle:false, right:false}, 500);
                doh.robot.typeKeys("username", 500, 5000);
                doh.robot.keyPress(dojo.keys.TAB, 500);
                doh.robot.typeKeys("password", 500, 5000);
                doh.robot.sequence(function(){
                        d.callback(true);
                }, 1000);
                return d;
        }
});

// use waitForPageToLoad to click the login button
doh.register('user_story1',{
        name: 'login_pagechange',
        timeout: 60000,
        runTest: function(){
                return doh.robot.waitForPageToLoad(function(){
                        // click login
                        doh.robot.mouseMoveAt(function(){ return dojo.doc.getElementsByTagName('input')[2]; }, 1623, 801);
                        doh.robot.mouseClick({left:true, middle:false, right:false}, 992);
                });
        }
});

doh.register('user_story1',{
        name: 'shippinginfo',
        timeout: 60000,
        runTest: function(){
                var d = new doh.Deferred();
                // fill out the shipping info form
                // you get the idea
                return d;
        }
});

doh.run();

The above code uses waitForPageToLoad twice: once to click the checkout button, and once to click the login button. In each waitForPageToLoad call, you pass a function containing commands that will change the page.
Let's examine the first waitForPageToLoad call more closely:

// use waitForPageToLoad to click the checkout button
// tests will wait for the next page to load
doh.register('user_story1',{
        name: 'selectitems_pagechange',
        timeout: 60000,
        runTest: function(){
                return doh.robot.waitForPageToLoad(function(){
                        // click submit
                        doh.robot.mouseMoveAt(function(){
                                return dojo.byId('checkout_button');
                        }, 1623, 801);
                        doh.robot.mouseClick({left:true, middle:false, right:false}, 992);
                });
        }
});
// next page has loaded; continue executing tests

As you can see from the above snippet, you use a waitForPageToLoad call as the return value of a test. No, the test doesn't actually test anything, but it is a convenient pattern for halting DOH while the page is changing. You give the test a long timeout so the page has sufficient time to load the next page. This is the *maximum* wait; test execution will resume immediately when the next page loads. You pass waitForPageToLoad a function containing robot commands that will do something to change the page. In this example, the robot moves the mouse to the checkout button. Then the robot clicks the left mouse button on top of the checkout button, causing the application to submit the form and go to the login page. When the login page loads, DOH resumes test execution and executes the next test; in this case, the next test is named 'login', and so it executes. You can execute any number of tests after that, and you can use waitForPageToLoad any number of times to navigate to more pages as your test requires. Using waitForPageToLoad in conjunction with initRobot in this way enables you to write long-running tests that can navigate across links and form submits within your application.
Posted over 15 years ago by dylan
I'm pleased to announce the launch of the Dojo Foundation web site. I first demonstrated the site at Dojo Developer Day V in Boston a few weeks ago, and we're pleased to have something worthy of representing the foundation. It would not have been possible without the hard work and assistance of Torrey Rice, Chris Anderson, Tobias Klipstein, and Dustin Machi. The Dojo Foundation site is minimalistic by design, and is powered by Dojango, making it one of the first sites to use this toolkit for making Dojo and Django integration easy!
Posted over 15 years ago by chrism
If you were at Ajax Experience or Dojo Developer Day a couple weeks back, you probably saw the quick demo of the Project Zero Visual HTML Page Editor tool that I mentioned in my last post. This tool is written entirely in Dojo (currently 1.1, moving to 1.2). It allows drag-and-drop editing of HTML pages that contain Dojo widgets, lets you edit the source of the page, and lets you switch back to visual design mode. A short video that takes you through the tool is now available here.
Posted over 15 years ago by dylan
Right on the heels of the Dojo 1.2 release, Aptana added support for Dojo 1.2. Aptana has been working closely with Dojo and SitePen to make support for new Dojo releases extremely timely. Such timeliness is a result of working closely with Aptana's Ajax Wrangler, Lori Hylan-Cho, whose blog post features the news and instructions for getting Dojo 1.2 support running in Aptana Studio.
Posted over 15 years ago by bill
Dojo 1.2 has a lot of great new features. There are two in Dijit I'd like to (very briefly) mention.

The first is all the look-and-feel improvements. We've changed colors, margins, etc. to make things look better and look/work more like users expect, and perhaps most importantly we've fixed a bunch of visual glitches: almost 100 visual issues with the widgets were addressed.

The other change in 1.2 that I want to point out is the attr() support and the underlying changes made to support it. You should now be able to set almost any widget attribute dynamically via the attr() method, even ones that you couldn't set before via any method, so this isn't just a change in method naming. The key to supporting this is to "liberate" any parameter-related code from postCreate() and move it into either custom setters (which are called automatically on creation) or into the widget's attributeMap. attributeMap is now much more powerful than before, as it lets you control widget attributes that map to DOM node attributes (like the disabled flag on a button widget), to DOM node innerHTML (like the title of a TitlePane), or to a CSS class (like the icon inside a button); a rough sketch follows at the end of this post. See my earlier post for details.

Bill
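Here is that rough sketch, written for illustration against the 1.2 API as described above rather than taken from Bill's post or from dijit source; the widget id, template, attribute names, and values are all invented:

dojo.require("dijit._Widget");
dojo.require("dijit._Templated");

// Setting attributes dynamically on an existing widget via attr():
var button = dijit.byId("saveButton");   // some existing button widget (invented id)
button.attr("label", "Save changes");    // updates the button's visible label
button.attr("disabled", true);           // pushed through to the DOM node's disabled attribute

// In a custom widget, attributeMap declares how a widget attribute is
// copied into the template whenever it is set via attr():
dojo.declare("my.StatusPane", [dijit._Widget, dijit._Templated], {
    templateString: '<div><span dojoAttachPoint="titleNode"></span></div>',
    title: "",
    attributeMap: dojo.delegate(dijit._Widget.prototype.attributeMap, {
        // map the 'title' attribute to the innerHTML of titleNode
        title: { node: "titleNode", type: "innerHTML" }
    })
});

With a mapping like this, calling myPane.attr("title", "Ready") updates the template node without any custom setter code.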
Posted over 15 years ago by dante
A lot has been going on in Dojo-land, and we're finally ready to push out the release. I have tentatively named this the 'Sliced Bread' release, as over the last six months each new item has truly felt like "the best thing since sliced bread". This iteration of the Dojo Toolkit has a very large delta: countless levels of innovation, cleverness, polish, and usefulness, as well as several internal changes regarding the overall project. I'd like to thank the numerous committers, contributors, and companies for all their hard work. We've had over 1100 issues addressed -- one might wonder when we find time to sleep, if at all. Truth is: Dojo never sleeps. With representatives from all across the globe, it is safe to assume that someone somewhere is trying to fix a bug or implement some new feature at all hours of the day. It is hard to even begin to explain the innovation and hard work that has gone into this particular cycle -- it is almost overwhelming. Perhaps we should release more often and have shorter announcements?

You should be able to drop this one into your project without worry, just like the last. We've been maintaining a release notes page to log the additions and potential migration issues one may encounter, and we have seriously stepped up the documentation effort (a great deal of thanks goes to Marcus Reimann for keeping us all in check and for all his hard work, and to all the core developers who have shifted out of "code mindset" to take the time to fill in the blanks). There is still a ways to go, but the Dojo Toolkit API surface is rather large, and a moving target.

One thing to note: I've taken over as project lead of the Dojo Toolkit, trying tirelessly to fill the enormous shoes left by Alex "slightlyoff" Russell. Granted, I've not been particularly vocal about my thoughts on the future of the project just yet -- we've all had our heads buried in code for so long now, there really hasn't been a moment to stop and think about it. The momentum of the code since we cut 1.1 was great, and directly in line with how I felt anyway -- so there really was no need. If anything, our vision has become more clear, our unity greater, and our collective desire to make the web rock strengthened. Words cannot describe the pride and joy this position brings me, nor can they convey how delighted I am to work so closely with such a diverse and talented group of coders and community members.

Enough mushiness ... you probably want to just download the release. This is the first version released under my oversight -- very exciting stuff if I say so myself! It is also worth noting the changes in the downloads: the "official" release now ships without any non-required files. Adopting the dojo-mini build script as the default download, the Dojo Toolkit now ships by default as an ultra-light, test-free, single 1.8 megabyte archive. This includes the full Dojo, Dijit, and DojoX project namespaces. For those doing advanced things, the -src archive (~20 megs) still contains the full uncompressed suite of utilities and code, including DOH, ShrinkSafe, the build system, tests, and example code. For the bandwidth-impaired, or entirely impatient: both the Google AJAX Libraries and AOL CDN versions of Dojo are already available and ready for hot-linking, for your cross-domain JavaScript pleasures.
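Hot-linking the cross-domain build comes down to a single script tag. A minimal sketch follows; the exact CDN path shown here is from memory, so double-check it against the CDN's documentation before relying on it:

<!-- Base Dojo 1.2 cross-domain build, loaded straight from a CDN (path is an assumption) -->
<script type="text/javascript"
        src="http://ajax.googleapis.com/ajax/libs/dojo/1.2.0/dojo/dojo.xd.js"
        djConfig="parseOnLoad: true"></script>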
For those not wishing to download the whole toolkit, and who just want to use Base Dojo (dojo.js) for the awesome utility library it is, feel free to just download the base file and put it anywhere in your app, though understand you won't be able to use dojo.require() for anything meaningful. SitePen has updated the Dojo QuickStart Guide to reflect some of the Base functionality that is new in 1.2. If this is your first time with Dojo, you will really enjoy the read, and it will certainly get you going in the right direction. Enough jabber! Thanks again to everyone involved in putting this monumental release together, and here's to 1.3!
Posted over 15 years ago by peller
We're rolling back the rollback. I'll tell the whole story here, but the history won't matter to most of you. In short, dojox.editor is going back to what it was when it was created a few months ago: a place for extensions to dijit.Editor. For Dojo 1.2, it will include a few experimental plugins, courtesy of Dustin and Mike:

- dojox.editor.plugins.TablePlugins - table creation and editing, missing since 0.4, is back!
- dojox.editor.plugins.UploadImage - leverages dojox.form.FileUploader and Flash to upload files

(A rough sketch of loading one of these plugins appears at the end of this post.)

The long story: a lot of work went into refactoring dijit.Editor for 1.2 to remove the iframe from the implementation, but it wasn't finished in time, so we ended up rolling it back. For a little while, a copy of the refactored code landed in dojox.editor. We decided that it was confusing to have two copies of the editor around, especially when there were known regressions in the new one. Add to that the fact that the new plugins were co-located with the refactored editor and its dijit plugins, yet they were not dependent on each other. So we removed the refactor from the trunk in time for the 1.2 release. What remains is dijit.Editor, much the way it was in Dojo 1.1 with some bug fixes, and the dojox.editor subproject with just a couple of experimental plugins for the editor. If anyone wants to play with the refactored editor code, simply get dijit.Editor from [14991]. We will eventually be creating a branch off that revision for continued development and testing.
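As a rough illustration of how an editor plugin gets pulled in (a sketch written for this page, not taken from the release notes; the plugin command names listed in extraPlugins are assumptions and may differ in your Dojo version), you require the plugin module and list it in the editor's extraPlugins:

<script type="text/javascript">
    dojo.require("dijit.Editor");
    // experimental table plugin from dojox.editor
    dojo.require("dojox.editor.plugins.TablePlugins");
</script>

<!-- extraPlugins names the plugin commands to enable;
     'insertTable' and friends are assumed names for this sketch -->
<div dojoType="dijit.Editor"
     extraPlugins="['insertTable','modifyTable','deleteTable']">
    <p>Edit me, then insert a table.</p>
</div>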