I've heard the comparison before, and I think it's mostly right. It ignores the amount of effort put into JS's developer ergonomics, though, since assembly is not designed to have a humane syntax (especially modern assembly).
I said "JS is the x86 of the web" a couple of years ago [likely at JSConf], but I can't claim it's original. [Nick Thompson said it on Hacker News this year as well.]
The point is JS is about as low as we can go. But it also has higher-level facilities.
Shaver's right: assembly without a great macro processor is not good for programmers or for safety. JS is good for both. So the analogy needs some qualification or it becomes silly.
The mix of high-level functional programming and memory safety with low-level facilities such as typed arrays (and the forthcoming ES.next extension of typed arrays, binary data) makes for a more powerful programming language than assembly, and of course memory safety is the first differentiator.
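A minimal sketch of that mix in one place: high-level closures and array methods alongside a typed array, which is a fixed-size, memory-safe view over raw binary data.

```javascript
// High-level: first-class functions and array methods, memory-safe throughout.
var squares = [1, 2, 3, 4].map(function (n) { return n * n; });

// Low-level: a typed array holds raw 32-bit floats in a contiguous buffer.
// Out-of-range indices are simply ignored -- never a buffer overflow.
var samples = new Float32Array(4);
for (var i = 0; i < samples.length; i++) {
  samples[i] = squares[i] / 2;
}
// samples is backed by a 16-byte ArrayBuffer (4 floats * 4 bytes each).
```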
Brendan Eich, again:
Doug's point about source beating bytecode is good. My friend Prof. Michael Franz of UC Irvine long ago showed O(n^4) complexity (runaway compute cycles, denial of service) in the Java verifier. JS is strictly more portable and fast enough to lex/parse as minified source.
Source as "bytecode" also avoids the big stupid Java bytecode mistake: freezing a poorly designed lowered form of Java, then being unable to evolve the high-form source, i.e., the Java programming language for fear of breaking Java bytecode compatibility. This severely messed up the design of inner classes and then generics in Java -- and then Sun broke bytecode compat anyway!
From a YCombinator thread a while back, Nick Thompson said:
Meanwhile Brendan was doing the work of ten engineers and three customer support people, and paying attention to things that mattered to web authors, like mixing JS code into HTML, instant loading, integration with the rest of the browser, and working with other browser vendors to make JS an open standard.
So now JS is the x86 assembler of the web - not as pretty as it might be, but it gets the job done (GWT is the most hilarious case in point). It would be a classic case of worse is better except that Java only looked better from the bottom up. Meanwhile JS turned out to be pretty awesome. Good luck trying to displace it.
- It's fast and getting faster.
- You can craft it manually or you can target it by compiling from another language.
This topic comes up on Hacker News often.
Have at it. I enjoy our thoughtful, measured and reasoned discussions, Dear Reader. You guys rock.
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.
As for the last post, I think it was poorly written and that's my fault.
JS will remain hand-coded as well as generated for the foreseeable future. That's part of what helped it win too: people could view source (better done on github.com now, but still usable with http://jsbeautifier.org/).
On the other hand, a medium-sized site that can be maintained by a handful of devs probably is not.
Believe it or not, classical OOP languages -- especially ones with not-very-expressive static type systems -- are not the only way to build large apps. It can be done in JS, but it's harder than it ought to be. This too is driving JS language evolution in Ecma TC39, where I spend a lot of my time (es-discuss at mozilla.org is the unofficial discussion list).
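One common way large JS codebases get structure without classical classes is the module pattern, which uses a closure as a namespace with private state. The `cart` module below is a made-up example, not anything from TC39 or es-discuss.

```javascript
// Module pattern: an immediately-invoked function whose closure holds
// private state, returning an object literal as the public API.
var cart = (function () {
  var items = [];                       // private to the module

  function total() {
    return items.reduce(function (sum, it) {
      return sum + it.price * it.qty;
    }, 0);
  }

  // Only what the returned object literal exposes escapes the closure.
  return {
    add: function (name, price, qty) {
      items.push({ name: name, price: price, qty: qty });
    },
    total: total
  };
})();

cart.add('book', 10, 2);
cart.add('pen', 1, 3);
// cart.total() sums price * qty across items; `items` itself stays hidden.
```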
There is still a lot missing, in the language or the browser runtime.
It is a cross-platform, or rather cross-browser, language for building websites/applications, but that's as far as the analogy goes.
I wish the ECMA specs would require that all built-in object/property labels be overwritable and exposed by for-in loops. That would make it a lot easier to kick the tires of new browsers and correct issues directly when certain vendors settle for "close enough" on a version of the W3C DOM API spec that has stalled for 10 years.
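The commenter's wish is the polyfill pattern in practice: because most built-ins are writable, a missing or broken vendor method can be patched in place. A sketch, assuming an older engine that shipped without `Array.prototype.map`:

```javascript
// If the vendor didn't ship it (or shipped it broken), write it ourselves.
// Simplified polyfill sketch -- a production one would also validate `fn`.
if (typeof Array.prototype.map !== 'function') {
  Array.prototype.map = function (fn, thisArg) {
    var out = [];
    for (var i = 0; i < this.length; i++) {
      out.push(fn.call(thisArg, this[i], i, this));
    }
    return out;
  };
}

var doubled = [1, 2, 3].map(function (n) { return n * 2; });
```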
When the easily emulated classical OOP paradigm is less ideal, however, I am deeply grateful for the ability to throw around object literals and functions with baked-in declaration contexts like they're candy, with a node-based markup language and the API for plugging into it being all the 'real-world' structure needed to bring it all together in a sensible fashion.
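Those "functions with baked-in declaration contexts" are closures: each function below remembers the variables from the call that created it, and an object literal is all the structure needed. The `makeCounter` name is a made-up example.

```javascript
// A closure bakes its declaration context into the function: both methods
// below share the same private `count`, invisible from outside.
function makeCounter(label) {
  var count = 0;                 // captured by the closures below
  return {
    label: label,
    increment: function () { return ++count; },
    current: function () { return count; }
  };
}

var clicks = makeCounter('clicks');
clicks.increment();
clicks.increment();
// clicks.current() reports 2; `count` is reachable only via the closure.
```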
I did this mock up of Space Invaders around five years ago (and then quickly found it wasn't remotely original, lots of people had already done complete "coin-op conversions" in JS).
The thing that really gets me is the actual working JS emulation of enough x86 instructions to boot Linux in the browser. I'm still half suspecting it to be a hoax!
Back to "the assembler of the web" - assembler is notoriously non-portable, whereas JS is surprisingly portable despite what people say.
The real analogy is with C, sometimes called the "portable assembler language".
It has long served as a low-level lingua franca that can be targeted by compilers. Stroustrup's original C++ implementation produced C as its output, as did Eiffel. So JS is the "C of the web".
I'd say that's an easily defensible statement, but saying that anyone who doesn't like ViewState is ignorant was pretty far beyond the pale.
ViewState in ASP.NET is an example of such an abstraction; jQuery is another. Just because jQuery excels at DOM manipulation doesn't mean you should build your entire application in it, though you certainly shouldn't build one without it.
More often than not I see ASP.NET developers who lack a proper understanding of how ViewState works and, to their chagrin, find their applications bloated and slow in production even though they were fast on their local machines.
Does that mean ViewState is a bad invention? Or is it merely something that must be mastered to produce a well-crafted application?
@Brendan: regarding CPS, the problem can be solved with callback patterns rather than state machines. I wrote a detailed post about it. It works well with preprocessors and could also be integrated into compilers but I don't know how efficient the result would be, compared to state machines.
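A minimal sketch of the callback pattern the commenter describes: each asynchronous step takes a continuation, so the "next state" is just the next callback rather than a case in an explicit state machine. `readRecord` and `lookupOwner` are hypothetical async steps, simulated here with `setTimeout`.

```javascript
// Two hypothetical async steps, each in continuation-passing style (CPS):
// results flow forward through callbacks instead of return values.
function readRecord(id, cb) {
  setTimeout(function () { cb(null, { id: id, owner: 'ann' }); }, 0);
}
function lookupOwner(name, cb) {
  setTimeout(function () { cb(null, name.toUpperCase()); }, 0);
}

// Sequencing two steps: errors short-circuit down the error path,
// success continues into the next nested callback.
function describeRecord(id, cb) {
  readRecord(id, function (err, rec) {
    if (err) return cb(err);
    lookupOwner(rec.owner, function (err, owner) {
      if (err) return cb(err);
      cb(null, 'record ' + rec.id + ' owned by ' + owner);
    });
  });
}
```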
I strongly disagree with the quote from Nick Thompson that denounces Java because "They wrote their string handling code in an interpreted language rather than taint themselves with C!" The very fact that they kept the core libraries wholly virtual is why GWT can use Java to smooth out browser inconsistencies in JS and do amazing things, like porting Quake II to run in a browser: http://code.google.com/p/quake2-gwt-port/ . Making a virtual language dependent on C at its core would just make it another C lib, and not a virtual language at all.
Would you be more impressed if you did a View Source and found that it was not only pretty on the outside but also inside?
Actually it depends on your definition of "nice". If you mean it semantically or HTML that actually is valid, then I agree.
If you mean "nice" in terms of readable, well-formatted code, but sacrificing user experience because of document size, no.
Not so long ago, I would have agreed with both, but nowadays developer tools like those in Chrome take care of beautifying "ugly" HTML and JS automagically.