Dojo 2.0

I have been working on a few things which I will be trying to campaign for as being part of Dojo 2.0. Almost everything I have been working on is included in my d2-proto GitHub repository. I want to cover some of my thinking here.

Object Composition

declare has been the foundation of the “class” structure of Dojo. I know Eugene has developed dcl, which is the successor to declare, but one of the things that I think declare doesn’t do is actually embrace JavaScript as a prototype-based language with constructor functions. Instead it tries to take the path of classical OOP inheritance.

The first challenge with this is that it leads to confusion among those developers coming from other languages. I remember recently having a debate with a developer trying to figure out how to create static methods in Dojo/JavaScript.

Something that I think embraces the concepts of prototypes and constructor functions is Kris Zyp’s ComposeJS. It provides a way to “compose” prototypes and constructor functions. It also leverages the concepts of Aspect Oriented Programming that have been heavily leveraged in “modern” Dojo, in particular join point advice.

I have brought that into my Dojo 2.0 repository and have added what is, in my mind, a key element: embracing ES5 property descriptors. By using the compose.property() decorator, you can generate ES5 properties in your prototypes.
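A rough sketch of what that can look like (the module ID and the exact shape of the descriptor passed to compose.property() are my assumptions, not the original example):

    require(['d2-proto/compose'], function (compose) {
        // hypothetical sketch -- compose.property() wrapping an ES5 property descriptor
        var Person = compose({
            name: compose.property({
                value: '',
                writable: true,
                enumerable: true,
                configurable: true
            })
        });

        var person = new Person();
        person.name = 'Kit';
        console.log(person.name); // 'Kit' -- direct property access, no .get()/.set() needed
    });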

In addition, the accessor descriptors understand the other decorators, like the AOP ones of compose.before(), compose.around() and compose.after(), making it possible to provide very feature rich direct property access without requiring discrete property accessors like the current Dojo/Dijit .get() and .set().

Object Observation

In Dojo 1.5 dojo/Stateful was added, a core module that provided the concept of discrete accessors and the ability to “watch” properties for changes. .watch() was a debugging feature added to SpiderMonkey (Mozilla/Firefox) that never became mainstream, and dojo/Stateful adopted the concept. The concept is very useful: knowing when a property changes and doing something about it. While ES5 properties allow direct property access, they don’t necessarily provide an easy way to actually observe changes to your properties, which is needed for key concepts like data binding.
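To illustrate the pattern dojo/Stateful established (this is its existing 1.x API, shown here only for context):

    require(['dojo/Stateful'], function (Stateful) {
        var model = new Stateful({ foo: 'bar' });

        // watch a property for changes made through the discrete accessors
        model.watch('foo', function (name, oldValue, newValue) {
            console.log(name + ' changed from ' + oldValue + ' to ' + newValue);
        });

        model.set('foo', 'baz'); // fires the watcher
        model.foo = 'qux';       // direct assignment bypasses the watcher entirely
    });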

Part of the ES6 Harmony effort is the specification of Object.observe(), which provides this ability to observe changes to objects in native code. Of course, waiting around for ES6 and Object.observe() is untenable for most. So in order to solve this problem now for Dojo 2.0, I have developed a new foundation module named Observable which tries to provide a very similar API to that of Object.observe() while only leveraging ES5 features. It should be possible to offload this to the native API when it is generally available.
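To give a feel for the approach, here is a trivial, hand-rolled illustration of the underlying ES5 technique (this is not the Observable module itself; the change record shape just mimics the Object.observe() proposal):

    // not the actual Observable module, just the general ES5 technique such a module can build on
    function observeProperty(obj, name, callback) {
        var value = obj[name];
        Object.defineProperty(obj, name, {
            get: function () { return value; },
            set: function (newValue) {
                var oldValue = value;
                value = newValue;
                callback({ type: 'updated', object: obj, name: name, oldValue: oldValue });
            },
            enumerable: true,
            configurable: true
        });
    }

    var record = { foo: 'bar' };
    observeProperty(record, 'foo', function (change) {
        console.log(change.type, change.name, change.oldValue); // updated foo bar
    });
    record.foo = 'baz';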

DOM Manipulation

Within the committers, there has been a fair amount of debate around the direction of DOM manipulation in Dojo 2.0. The current thinking is to potentially adopt jQuery 2.0 as the DOM manipulation engine of Dojo 2.0, or to rewrite/refactor the existing DOM manipulation modules. There are also some who think that Kris Zyp’s put-selector is worth consideration. I wanted to understand more about put-selector and so I brought it into my repository as put. Once I got my hands on it, I was impressed with it. For those not familiar with it, it leverages the concepts of CSS selectors to perform DOM manipulation. Instead of having a complex API to create, modify, move and place DOM nodes and structures, you simply use the one function: put(). You have likely been using one form or another of CSS selectors to select your DOM, which are the core of both jQuery and dojo/query, so why not extend that paradigm to DOM creation and manipulation?
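A few examples of the style (the module ID below is a guess at how I have it in my repository; the selector semantics are put-selector’s own):

    require(['d2-proto/put'], function (put) {
        // create a <div class="note"> and append it to the body
        var note = put(document.body, 'div.note');

        // create nested structure and give the innermost node some text content
        put(note, 'ul li.item', 'first item');

        // modify an existing node: add a class and set an attribute
        put(note, '.highlighted[title=important]');

        // and finally, destroy the node
        put(note, '!');
    });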

There are those who are familiar with it who feel that its concise syntax can easily lead to abuse and confusing code. This is potentially true, but like all powerful tools, it can be misused. The same is true of RegExps, which can easily lead to confusing code but solve complex problems much more quickly. My biggest argument is that DOM selection already requires a good grounding in CSS selectors, so why not build on that knowledge? Developers being aware that other people may read their code is far more important than hobbling people by dumbing down their tools. A good developer should document their RegExps just as well as they document their selectors.

Declarative Syntax

As many of the Dojo committers know, I am a huge supporter of having functional parity between JavaScript and the Dojo declarative syntax. The declarative syntax is one of the main reasons I was drawn to Dojo in the first place. While I don’t use it personally anymore, I have quite a lot of interest in the dojo/parser which drives the declarative syntax. Some of my first significant contributions to Dojo were around the parser.

So of course I couldn’t pass up the opportunity to provide a new parser as a candidate for Dojo 2.0. From an API perspective it is very similar to the current dojo/parser and has the same features, but it wholly drops the legacy of the current parser. Also, it is designed to be a “speed demon”. My testing indicates that in certain scenarios it is about twice as fast as the current parser and even 20% faster than the dojox/mobile/parser, which was specifically designed to be efficient on mobile devices.

It still needs additional work, but my hope is that there will be only a single parser for Dojo 2.0, one that is as lightweight and performant as possible for mobile, provides only what is needed for complex templating, and is as feature rich as desktop applications require. It might even be possible with dojo/has to provide opportunities to build code that is tailored to the environment.

Widgets

With, in my opinion, two of the core parts of the toolkit being different in Dojo 2.0, it sort of calls for a different path with widgets. If object composition and DOM manipulation change, there is an opportunity to look further still. Dijit has been a great strength of Dojo. Again, it was one of the reasons that I was drawn to Dojo in the first place, because I wanted a JavaScript toolkit and a widget framework as “one”. When I was looking around in 2008, there were very few that combined the two well, if at all.

But in my opinion, time has marched on. Dijit has become solid and feature complete, but also relatively feature static, because of the number of people and organisations that have built on it. It is now almost impossible to make a significant improvement in Dijit without causing havoc somewhere else downstream. People need that dependability and stability though. There is a lot of good in Dijit, a lot.

While I understood how to create my own Dijit based widgets, I will admit I didn’t deeply understand the underpinnings of Dijit and widgets in general. Therefore I wanted to see if I could create a widget based on compose and put, and I have created a very rough Widget base module on top of them. This gave me enough to think about.

What is clear to me now is that while there potentially needs to be a Dijit 2.0 that works off Dojo Core 2.0, there also needs to be another widget system as part of the eco-system that can start afresh, informed by Dijit but unleashed from the “shackles” of the Dijit 1.X legacy. I think there are some key points that a new widget system needs to address:

  • Isomorphism – The concept that the widget shouldn’t care about where it is rendered.  As server side JavaScript continues to mature and there are constraints on the speed and power of mobile devices, isomorphism becomes even more important.  One of the aspects of put is that it can render DOM strings without the need of the likes of esprima.  It is quite conceivable isomorphism is achievable without too much work.
  • Data Binding – Some additional solutions have been added to Dojo to solve this problem, like dojox/mvc, but ultimately I think that data binding is something that should be built into the core of a widget framework, instead of being an adjunct.  While Dijit has always been able to read from a data store to populate things like select lists, it hasn’t been as robust in being able to update records and state based on binding to underlying data.
  • Responsive – “Modern” widgets need to be built from their foundations to meet responsive design.  Right now, the current solution is to touch enable Dijit and then develop dojox/mobile.  And while there has been a lot of work between dojox/mobile and Dijit, they are essentially two different projects, with dojox/mobile being built on top of some of the foundation of Dijit.  The need to have efficient code on mobile platforms has led to a significant “fracturing” of the bases of Dijit to maintain the two different sets of widgets.  The current framework also lacks any specific API features that allow developers to build responsive widgets.  A new framework should build this into its foundations.
  • Bundling of all 3 Technologies – Widgets are not just JavaScript objects with a DOM structure.  There is a 3rd technology that is often relegated to 2nd class citizen status: CSS, the visual presentation and theming.  To me, this has always felt slightly disconnected and there is no easy way of “bundling” or even “unbundling” the necessary CSS to run a widget in the current Dijit framework.  There is no way for a widget to ensure its styling is loaded, nor a way to encapsulate it, so it can be composited into a “built” widget.

There may be other key factors in widgets for Dojo 2.0, but in my mind these are the key things we don’t have a ready answer for at the moment.

Packaging, Wrap and Toolchain

One of the main areas where I first expressed a lot of my thinking for Dojo 2.0 was packaging, wrap and toolchain. It is the area where my thoughts were the most mature. As far as packaging goes, I laid out my thoughts, as well as took a lot of feedback from others, in Dojo 2.0 Packages and Distributions. From a toolchain perspective, I laid out my thoughts about package management, again with lots of feedback, in Dojo Toolkit 2.0 Package Management. There is more, but it is probably best left to a future post or discussion!

In Conclusion

As always, these thoughts are my own; they don’t represent the way Dojo 2.0 will go, but they lay out my thinking and will form the foundation of what I will “campaign” for in Dojo 2.0. I have always said I would look into the items I was interested in instead of trying to “boil the ocean”. I may pick up more items that are of interest to me as we get further down the road to Dojo 2.0. The biggest thing I would like to do though is encourage (provoke?) others to start making their case for Dojo 2.0.

ES5 Accessor Properties

I have been messing around this weekend with Object.defineProperty and realising some unanticipated behaviour when using accessor properties and prototype inheritance. Essentially it boils down to this: accessor properties are only ever owned by the Object they are defined on.

For example, if I were to do the following:
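    // a reconstruction of the sort of example in question, not the original code
    var proto = {};

    Object.defineProperty(proto, 'foo', {
        get: function () { return this._foo; },
        set: function (value) { this._foo = value; },
        enumerable: true,
        configurable: true
    });

    function MyClass() {}
    MyClass.prototype = proto;

    var instance = new MyClass();
    instance.foo = 'bar';

    console.log(instance.foo);                   // 'bar'
    console.log(instance.hasOwnProperty('foo')); // false -- the accessor stays on the prototype
    console.log(Object.keys(instance));          // ['_foo'] -- only the backing value is an "own" property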

Now I hadn’t actually expected that. So you have a choice: in your constructor function, you can iterate through your prototype, looking for accessor properties, and copy the property descriptors directly onto the instance, or you can leave well enough alone.
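Continuing the sketch above, the first option might look something like this:

    function MyClass() {
        // copy any accessor descriptors from the prototype directly onto this instance
        var proto = Object.getPrototypeOf(this);
        Object.getOwnPropertyNames(proto).forEach(function (name) {
            var descriptor = Object.getOwnPropertyDescriptor(proto, name);
            if (descriptor.get || descriptor.set) {
                Object.defineProperty(this, name, descriptor);
            }
        }, this);
    }
    MyClass.prototype = proto;

    var instance = new MyClass();
    instance.foo = 'bar';
    console.log(instance.hasOwnProperty('foo')); // now true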

Dojo and Node

Using client side JavaScript frameworks on server side NodeJS? “Crazy…” I can hear you mumble to yourself.  Well, I tend to disagree.

The biggest advantage is that you have already merged your language on both client and server, so why not merge your coding style?  Having fewer basics to remember is always a good thing.  It lets you focus on being more productive.  I have been working on a project for a while, and one of my personal requirements was to make it as Dojo-like as possible.  Mainly as an experiment, but also to learn how “challenging” it would be.  In this post, I want to share with you some of the lessons I have learned.

My opinion, shared by others, is that CommonJS and Node abandoned AMD because “we don’t care about browser user agents”, and while that sort of posturing may be self-gratifying, it leaves those who don’t have the luxury of just coding for the backend frustrated and confused.  Some may consider it a religious argument, but there is little overhead and nothing “bad” about AMD running server side.  Certainly there are some aspects of AMD that aren’t needed under server side JavaScript, but the only real “issue” is that generally you are a little more structured with your modules (woah, organisation and structure, I know, we can’t really have that going on).

Bootstrapping

The first thing you need to do is bootstrap an AMD module loader.  There is some good information on using RequireJS under Node (including information on how to define AMD modules to work without an AMD loader under Node), but the Dojo Loader is my loader of choice.  So a basic sort of bootstrap would look like this:
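    // server.js -- a rough sketch of a bootstrap; the exact config you need may vary
    global.dojoConfig = {
        async: true,
        baseUrl: 'src/',
        packages: [
            { name: 'dojo', location: 'dojo' },
            { name: 'app', location: 'app' }
        ]
    };

    // loading dojo.js installs the AMD loader (and its global require()/define())
    require('./src/dojo/dojo.js');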

This of course assumes that your Dojo installation is located in the ./src directory along with your custom modules located in ./src/app.  It will also only load the Dojo Loader and nothing else, which would make for a very boring application.  What I usually do is specify a root module, which then does whatever is needed by my application.  For example:
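    // same bootstrap sketch, but with a root module (the module ID app/main is just illustrative)
    global.dojoConfig = {
        async: true,
        baseUrl: 'src/',
        packages: [
            { name: 'dojo', location: 'dojo' },
            { name: 'app', location: 'app' }
        ],
        deps: ['app/main'] // loaded automatically once the loader is up
    };

    require('./src/dojo/dojo.js');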

And now we just need an AMD module to load.  Your main module could be either an AMD require() or an AMD define().  I won’t go into the subtleties of the two, but usually your main “block” of code should be a require().  You should note, though, that relative MIDs don’t work with the standard require(); they need the context sensitive require().
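A minimal sketch of such a root module (the dependency here is arbitrary):

    // src/app/main.js
    require([
        'dojo/_base/lang' // absolute MIDs are fine; relative ones like './wait' need the context-sensitive require
    ], function (lang) {
        console.log('application started');
    });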

I usually name my “bootstrap” code server.js and place it in the root of my project.  Then all you need to do from the command prompt is:
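    node server.js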

Loading AMD Modules

Well, this is simple. Assuming the package map in your config was accurate, you just load modules to your heart’s content. So maybe we want to create a module, named wait.js, that provides a “deferred/promise” based timeout:
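    // src/app/wait.js -- a sketch of such a module using dojo/Deferred
    define(['dojo/Deferred'], function (Deferred) {
        return function wait(ms) {
            var dfd = new Deferred();
            setTimeout(function () {
                dfd.resolve(ms);
            }, ms);
            return dfd.promise;
        };
    });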

And then to use the module, we update our main.js:
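    // src/app/main.js
    require(['app/wait'], function (wait) {
        wait(1000).then(function () {
            console.log('waited one second');
        });
    });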

Loading Node Modules

You might be saying “oh, this is great, but I have this Node module I want to use, and you replaced my require() with this AMD stuff.”  The good thing is that with Dojo 1.8, a new plugin module was included named dojo/node.  This module will allow you to load a regular Node module.  For example, if we wanted to load a file:
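    // the file name here is just for illustration
    require(['dojo/node!fs'], function (fs) {
        fs.readFile('readme.txt', 'utf8', function (err, data) {
            if (err) {
                throw err;
            }
            console.log(data);
        });
    });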

dojo/node resolves modules in exactly the same way the native Node require() does, which is relative to where you loaded the Dojo Loader.  So that means it will look in ./node_modules for local modules if you bootstrapped the loader from ./server.js.

The Dojo Way

Now that you are loading Dojo and other AMD modules, you might be feeling a bit more comfortable. One of the great things about Node is that it is non-blocking and most of the APIs are designed to run asynchronously, which in a lot of ways is the Dojo way. The thing that will smack you in the face though is the propensity for Node-like APIs to use callbacks, instead of a promise based architecture, to provide this asynchronous style of programming. I personally dislike the whole convention of invoking a callback with an err argument first and then any result arguments after that.

When I started doing a lot with Node, I first just tried to adopt that style, but I really started getting frustrated, especially with long chains of asynchronous functionality, and I quickly longed for the days of Dojo Deferreds. So I started writing functions to provide the promise interface.

Doing so is relatively straightforward. Let’s, for example, create a module that provides Node’s fs.readFile as a promise based call.  To start off, a Node style code block would look like:
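    // plain Node style -- the file name is just for illustration
    var fs = require('fs');

    fs.readFile('readme.txt', 'utf8', function (err, data) {
        if (err) {
            throw err;
        }
        console.log(data);
    });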

Now let’s create an AMD module, based on Dojo Deferred, that provides this API:
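    // src/app/readFile.js -- a sketch of wrapping fs.readFile in a Dojo promise
    define(['dojo/Deferred', 'dojo/node!fs'], function (Deferred, fs) {
        return function readFile(filename, encoding) {
            var dfd = new Deferred();
            fs.readFile(filename, encoding, function (err, data) {
                if (err) {
                    dfd.reject(err);
                } else {
                    dfd.resolve(data);
                }
            });
            return dfd.promise;
        };
    });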

Then we need to use our new module:
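    require(['app/readFile'], function (readFile) {
        readFile('readme.txt', 'utf8').then(function (data) {
            console.log(data);
        }, function (err) {
            console.error(err);
        });
    });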

Now that feels significantly more “Dojo like”. Of course you could spend all day wrapping functions, which is why I created setten. While it is still early days (and in writing this article I have realised some benefits I could provide) it is available to make your Node code more “Dojo like”. I know there are other libraries out there that deliver a promise based system for Node, but I am not aware of any that are based off of the Dojo Promise API.

Other Thoughts

I will admit I am still learning every day about how to get Dojo and Node working well together, but I am pretty happy with the results. I am also finding that a fair few of the libraries out there provide both CommonJS and AMD module support, and those are even easier to integrate. What is great about libraries that support AMD is that you have an even more seamless experience between the server and the browser user agent.

One of the early mistakes I made was trying to keep my server-side modules as a “sub-package” of my main package, where I would keep shared or client-only modules.  The problem is that when you go to build your client code for deployment, having all your server code mixed in isn’t so good.  So I usually end up with three AMD packages in my ./src directory, something like:

  • ./src/app-server – Server Only Modules
  • ./src/app-client – Client Only Modules
  • ./src/app – Shared Modules

Another thing, which I mentioned briefly above, is that the working directory for all of your code is the path where you invoked the node binary. So, if you are dealing with paths for files, even in modules buried in packages somewhere, remember what your working directory is.  Relative define() paths work because the Dojo Loader figures out the absolute path based on the configuration.
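For example, something like the following (the module and file names are made up) reads relative to where node was invoked, not relative to the module’s own directory:

    // src/app/config.js
    define(['dojo/node!fs'], function (fs) {
        // resolved against the working directory (where you ran `node server.js`),
        // not against src/app/
        return JSON.parse(fs.readFileSync('config.json', 'utf8'));
    });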

I have four main projects that are built around Dojo on Node.  They are in various states of development and were done at various points in my “learning” of how best to do Dojo on Node, but they may be able to give you some ideas:

  • dote – An open collaboration forum
  • dojoment – A markdown based documentation wiki
  • doscuss – A community discussion/support forum
  • kitsonkelly.com – My personal website

Future

One of the things I want to play with at some point in the future is using the Dojo DOM modules against the jsdom package.  James Thomas has done a server_side_dijit project, so it does work, but I haven’t tried it myself yet.

I have yet to find any significant disadvantages or limitations with using Dojo server side, and like I said at the start, it is a lot easier to deal with just one style of coding, and since the Node style of coding doesn’t work client side, why not do Dojo-style coding server side?

Equality versus Fairness

Being an American expatriate, especially living in the United Kingdom, gives a particular perspective on US society.  I think this is highlighted during the Presidential election, when you see Americans en masse struggle with the concepts of equality and fairness.

Equality and fairness are concepts that are easily confused.  Especially because one of the main tenets that the United States was founded on was equality, it is easy for it to be muddled with the concept of fairness.  My opinion though is that the aggrieved, both liberal and conservative, are arguing about fairness.  Equality means everyone, no matter what, is treated in the same way by the government and society.  Ultimately Americans don’t want this.  This is actually a far more French concept, where Muslims are made to remove their religious garments in order to ensure equality.  It is manifest in Northern European governments and societies, where people are taxed into equality by the government.  Americans don’t want that.  A land of opportunity cannot survive on equality.

Fairness, on the other hand, means that people are treated consistently and in line with the merit of their actions.  This is what Americans are really getting at.  It is what is driving the concern about the 1%, the feeling that somehow the 1% were able to get a special “member’s only” pass.  The right argue that if people don’t work, they shouldn’t get money from the government.  Again, fairness is at the root of it all.

In the UK, while equality and fairness are often confused, it is fairness that bubbles to the top.  People, consciously or unconsciously get upset when the rules of fairness are broken.  Fairness is far more important than equality in the UK.  The political parties mention it in their rhetoric and policy often enshrines this, versus equality.  Equality is presumed to a degree, but fairness will trump equality every time.

In the current Presidential election, I think it is generally unstated, but the main driving factor behind who will vote for which candidate is actually far more about perceived fairness than anything else.  Those voting for Romney will do so because they believe “hard work” will be treated fairly, and that if you have been successful, you will be protected and not treated unfairly.  Those voting for Obama believe that a large portion of the population hasn’t historically been treated fairly and hasn’t had the right opportunities to take advantage of, and see that Obama will continue to protect them at the cost of those who can afford it.  Both sides believe the other candidate will treat a segment of society unfairly.

If we were to move our dialogue to better articulate fairness, to better understand what is really important to us, then we might actually move the conversation forward.

Socialised Medicine Again…

Unlucky for me, I have had another up close and personal experience with that “evil incarnate” US public opinion raised me on: socialised medicine.  Paul Ryan’s recent comments about “death panels” have made me again want to speak about my experiences.

The last time I had an up-close and personal experience was after I fractured my eye socket in a biking accident on holiday.  This time, it was the less interesting occurrence of a kidney stone, and not just any sort of “mild” kidney stone, but a kidney stone that had me reeling and writhing in so much pain that I went to the A&E (ER for you Americans).  It was an interesting comparison because I had gone to the ER about 13 years ago when I had my first one (only after it had passed though and caused some damage on the way out).  So I feel uniquely positioned to compare and contrast the two experiences.

I arrived in the A&E about 11PM on a Thursday.  There was some poor kid coughing his head off, with tears streaming down his face.  I figured I must be in the right place.  There were a couple of other random people waiting in the waiting area.  So far, no “death panels” in sight, thankfully.  Maybe they were taking a break.  Anyway, I stood there for a few moments in excruciating pain while the poor lady looking after the kid explained that his parents had had to return to South Africa for two weeks, that they had only recently moved up to Scotland, and that he was staying with them while his parents were abroad.  While this was all happening a nurse came out, collected him and shuffled him off to the back to start treating him.

Finally, once the poor accidental guardian of the broken kid had provided his information, it was my turn.  A couple of quick words and, because I was already “in the system”, the receptionist said to take a seat and someone would be with me.  I guess because I wasn’t bleeding and appeared to have all my limbs, I had to wait about 10 minutes for a nurse to see me.  The nurse practitioner called me into a side room to triage me and asked me a few questions.  Because of the type of pain, and having had a kidney stone 13 years ago and a very mild one recently, I was pretty sure what it was, but who knows, maybe a small animal had embedded itself in my insides and was gnawing away.  She was obviously convinced I needed to be “seen” more properly and made me go wait out in the waiting area for another few minutes.

A couple of minutes later she called me back and put me into one of the bays in the A&E ward.  At this point the pain was as bad as it had been and I was getting a bit delirious, having a hard time breathing and sweating quite a bit.  It seemed like things were taking forever, but looking at the clock, it had been about 20 minutes since I had shown up in the A&E.  Finally it was time to gown up and wait for a doctor.  The nurse said that she could give me some pain relief, but until I had passed some fluid and they had analysed it, the only thing she could offer was a suppository version of an analgesic.  I thought about it for about 10 seconds, hopped up on the table and dropped my pants.  There is pain, and then there are kidney stones.

I had to wait another 15 or so minutes after being violated by the lovely nurse.  It seemed, placebo effect or not, that the edge had been taken off the pain, so while I couldn’t even begin to think of sleeping or concentrating, I at least wasn’t going to go mad, which is where I had thought I was headed.  A lovely doctor, who introduced herself as a student doctor, came in and asked me some more questions, poked and prodded me, and pretty much confirmed that it was serious.  She put an IV into my arm in case they needed to put me on some sort of drip, as well as took some blood samples, and then she said she was off to consult with one of the Consulting Doctors (UK terminology for “the real doctor”) while I provided a urine sample.

Nurses popped in every once in a while to take my blood pressure, temperature and heart rate.  Eventually the “real” doctor showed up, having screened my blood and urine.  The good news was my blood was fine; the bad news was that there were blood and protein in my urine, which indicated that I likely had a kidney stone.  She indicated that I would probably need to spend the night in the A&E and go for scans in the morning.  She saw that I was rather uncomfortable with this and suggested that she could give me more pain medication (orally, thankfully, this time) that was a bit stronger and see how I responded in 30 minutes.  By this time it was about 01:30.  I said OK and was given some of the “hard stuff”.

30 minutes later, I had started to doze off; while I was far from out of pain, I had dropped down to about a 6-7 out of 10, more than enough to start to drift off.  Seeing how well I was doing, the doctor agreed that I could go home and told me that they would call me between 09:00 and 10:00 to tell me when to come in for my scans that day.  She gave me some more pain meds (all prescriptions are free in Scotland, so there was no paperwork at all for me to sign, nothing to do but grab my meds, call a taxi and head back).

This is where I got worried that this was all “too good to be true”.  In the US, my one trip to the emergency room went at a similar pace (except I wasn’t just handed my prescriptions, I had to take them to a pharmacy to get them filled), but it was followed by years (literally) of dealing with all sorts of “suppliers” to the ER, chasing down every bit of the paperwork between me and the insurance company.  It seems that some of those making claims in the ER had somehow gotten my address slightly wrong and the insurance company denied those parts of the claims, so they came after me.

So I was worried that, now that I wasn’t in the care of the A&E, things couldn’t possibly stay that efficient and my previous experience must have been a fluke.  About 09:35 I got a call from the scanning department at the hospital; they wanted me to come in at 12:45 that day and told me where to go.  I said sure and showed up on time, where the receptionist found me quickly on the system and told me to go down a hallway and take a seat outside the CT scanning doors.  A nurse/technician came out to talk to the old man sitting next to me and said “oh, you must be Mr. Kelly” when seeing me, to which I replied in the affirmative.  She said “oh, it will be just a couple of minutes”.  Again, I thought she must be wrong, that they couldn’t be that efficient.  Sure enough, about 5 minutes later (5 minutes after my appointed time) I was brought into the CT room, run through the scanner and then escorted by the technician to an X-Ray bay, because the doctors in the A&E wanted some X-Rays too.  She said that the results would be sent to the A&E and I just needed to go back there, check in, and one of the A&E doctors would tell me the results.

I went back to the A&E, explained myself, and they found that I was in their “waiting to show up again” pile.  They had me take a seat and I waited about 45 minutes to see a nurse, where the first little inefficiency I saw happened: she didn’t know what I was there for until I told her, and then she said “oh, ok, well, I will get one of the doctors to look at your results”.  I waited another 20 minutes and one of the doctors (yes, to you Americans, you might be surprised to find doctors actually come and get their patients in the UK) came out to escort me to a bay and then told me that I have a couple of stones, that they are no longer blocking the urinary tract (which is why I wasn’t in that much pain by now), but that they likely would need to be removed.  She prescribed me a couple more types of medication to help me and went off to get them for me.

Again, I walked out of the A&E, prescriptions in hand, having not signed a single thing or paid one penny.  I know that not all experiences are as good as mine, but I think in a lot of ways it is what you get used to, and so the bar gets raised.  While there are cost based evaluations of treatments and not all treatments get approved, like some experimental cancer drugs that haven’t been proven to significantly improve survival rates or quality of life, which I am sure matters a great deal to those who are affected, I think in a civilised society you shouldn’t ever have to worry about whether you will get treatment for the things you need.

Be you rich or poor, seeing a doctor, getting your medication and having your health looked after is a right as a member of British society.  I don’t understand why many Americans don’t see it that way.  Many effectively say “no system is better than an imperfect system” or they don’t want to “pay for those who don’t pay into the system”.  That seems ludicrous to me.

Criticism of Dojo?

With the release of Dojo 1.8, I thought I might share my thoughts on the criticisms of Dojo and whether I think they are valid.  So I am going to try to take an honest look at those criticisms and give my opinion of where the community is at.

Dojo is Slow

This is one of the biggest criticisms levelled at Dojo, and it is a largely historical one.  With Dojo 1.7 and the full introduction of AMD, a lot of it is no longer valid.  The challenging thing is that Dojo does let you do “stupid things”.  Most complex software lets you do “stupid things”.  Running against an unoptimised source distribution is a bad idea.  Trying to deploy a complex application against a CDN is a bad idea.  But I would not say it is fair to write Dojo off for that.

Dojo is Complex

Yes.  There are over 1,000 modules in a distribution.  It is everything and the kitchen sink, plus a little more.  Is that complexity intimidating?  It can be, but it isn’t insurmountable.  It is true that other libraries are less complex, but then you spend your time re-inventing the wheel or trying to find an add-on library that might work and hopefully not clash with something else.  One of the big factors in the move to AMD, though, is to try to make it easier to pick and choose what you want out of Dojo and other libraries.

The Documentation Sucks

A hopefully historical point.  While the documentation with 1.8 isn’t perfect, it is a significant step forward.  Not only did we change a lot in 1.7, we also broke a lot of the documentation, most notably the API Viewer.  It is now working.  The reference guide has had over 1500 edits to improve it and there have been vast improvements to the tutorials, with even more coming.  While it isn’t an excuse, in relative terms my opinion is that we were really in the middle of the pack.  Hopefully we have now moved closer to the top.

Contributing is Hard

Yes, but it is easier with the eCLA available here.  The Dojo Toolkit and the Dojo Foundation take their openness seriously.  On a moral level, open is open, and stealing other people’s work without their permission is bad.  I personally take pride in the fact that every line of code in the Dojo Toolkit has someone’s name against it who has attested it is their own work.  That level of commitment to clean IP garners a significant level of respect from some large corporations who invest their time and money into Dojo too.  So contributing to Dojo has a bit of hassle, but the benefits of being totally serious about being open and free are worth it.  On the other hand, being a relatively new committer on the scene, I have found the community very welcoming and encouraging.

ES5/HTML5/CSS3 are Good Enough

All three of these (and eventually ES6) are dramatic improvements over what was there just a few short years ago, and some of the core capabilities of Dojo are better served by “native” functionality.  But Dojo isn’t about querying and manipulating the DOM or building a website; Dojo is about building enterprise applications.  As the underlying technologies continue to mature, Dojo will and does back away from trying to solve those problems.  But if you just go with the underlying technologies, you will quickly find yourself collecting small snippets of this and that to try to solve a problem, and sooner or later you end up with a Frankenstein of an application.

Dojo is Old

I prefer the word “mature”.  It is true that in internet terms, Dojo is very old.  That doesn’t mean it has “jumped the shark” though.  What I really like about 1.8 is that some of the core APIs were swapped out with wholly new implementations (e.g. dojo/request and dojo/promise).  That is a sign of a toolkit that can adapt and change.

Yeah, but NodeJS will Change Everything

Been there, done that, got the t-shirt.

I Don’t Like AMD

I am not very fond of watermelon.  In my opinion that is a slightly more logical stance than not liking AMD.  While there is “debate” about AMD and its applicability in the server environment, I don’t know of a reasonable, logical alternative.  Having a well defined, dynamically loadable module structure is necessary.  It isn’t optional.  Dojo was already well structured this way and AMD improved upon that.  If there were one benefit you couldn’t have because of AMD, then I might be willing to change my mind.

There Isn’t Enough Market Penetration

Ok, potentially accurate.  A lot of usage of Dojo isn’t on the public facing web.  Dojo is used to build a lot of enterprise applications and is incorporated in many commercial products.  Dojo’s market share on the public web does continue to grow, albeit slowly.  But market share doesn’t necessarily have bearing on stability, maturity, openness, performance, or really anything that someone should consider when selecting a toolkit.  Its market penetration also doesn’t reflect on the capabilities of the core community and committers.  Most open source projects have a small core of people who are actually active in the project, and Dojo is far from one person maintaining a code base, which is what you get with some fairly significant projects.

Conclusion

For me, it is always good to be “self critical” so that you can challenge yourself to improve the things you don’t have quite right.  The Dojo Toolkit, like anything out there, doesn’t have everything right, but in my opinion it has a lot less wrong than you might assume, and it has a lot of people who care about it and are working to make it better.  I am quite excited about what will come over the next year.  I am personally confident that over the next couple of months Dojo 2.0 will start to become something tangible and exciting.  Hopefully we can build on the 8 years of lessons learned during the course of the project to provide a toolkit that will meet the next generation of thin client applications.

Goodbye TextMate 2, Hello Sublime Text 2

I tried, I really tried, TextMate, but having dealt with an alpha of 2 for a long time with no clear indication of what is going on, dwindling community innovation and just a few really annoying things that I couldn’t continue to put up with, I have this morning tried Sublime Text 2 and I am afraid I am not looking back.

Even though I had invested a fair amount of time in TextMate, when I needed/wanted some of the improvements in TextMate 2, I was frustrated that the project feature had been stripped with the excuse “it didn’t work” and that it wouldn’t be coming back.  That is great that the developer had that opinion, but it wasn’t one I shared.  The new sort of “semi finder” bollocks continued to frustrate the hell out of me.  I like the concept of .tm_properties, but it did mean I had to add it to all my .gitignore files so I didn’t go around polluting code with my own settings.  The number of times that things just didn’t feel “contained”, and a serious lack of context menus for doing functions on files, frustrated me.  TextMate had started to “get in the way”, constantly having to set up .tm_properties every time I needed to work on a new set of code, just so I could have something meaningful at the top of the window.  I find myself working in several “projects” at the same time, or I may be referring to one project while working on another.  It is common for me to have 5-6 open at a time, almost as a real-time “workbin” to remind me that I need to go twiddle something over there.

I had fallen quickly in love with TextMate, having shed myself of the overwrought Eclipse.  I don’t need an IDE, I need a text editor on steroids.  Having fired up Sublime Text 2, even without changing a single setting, it already felt more powerful and less obtrusive at the same time.  Everything is subtle, yet there is a lot there.  A lot more than I would have expected, and within 5 minutes I had figured out how to create projects and even installed some specialist syntax highlighting.