Categories: Articles, Thoughts

JS private class fields considered harmful

Reading Time: 2 minutes

Today I mourn. What am I mourning? Encapsulation. At least in my projects.

As a library author, I’ve decided to avoid private class fields from now on and gradually refactor them out of my existing libraries.

Why did I make such a drastic decision?

It all started a few days ago, when I was building a Vue 3 app that used Color.js Color objects. For context, Vue 3 uses proxies to implement its reactivity system, just like Mavo did back in 2016 (the first one to do so as far as I’m aware). I was getting several errors and upon tracking them down I had a very sad realization: instances of classes that use private fields cannot be proxied.

I will let that sink in for a bit. Private fields, proxies, pick one, you can’t have both. Here is a reduced testcase illustrating the problem.
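The original post links to a live reduced testcase; here is a minimal sketch along the same lines, using a hypothetical Color-like class (not the actual Color.js code) and a proxy with an empty handler:

class Color {
	#coords;

	constructor(coords) {
		this.#coords = coords;
	}

	get coords() {
		return this.#coords;
	}
}

let color = new Color([1, 0, 0]);
let proxied = new Proxy(color, {}); // roughly what a reactivity system does to your objects

color.coords;   // [1, 0, 0]
proxied.coords; // TypeError: the getter runs with the proxy as its `this`,
                // and the proxy does not carry the #coords private field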

Basically, because a Proxy creates a different object, it breaks both strict equality (obj1 === obj2) and private properties. MDN even has a whole section on this. Unfortunately, the proposed workaround is no help when proxies are used to implement reactivity, so when I tried to report this as a Vue bug, it was (rightly) closed as wontfix. It would not be possible to fix this in Mavo either, for the same reason.

I joined TC39 fairly recently, so I was not aware of the background from when proxies and private class fields were designed. Several fellow TC39 members filled me in on the discussions from back then. A lot of the background is in this super long thread, and there are some interesting tl;drs in the replies to my tweet.

After a lot of back and forth, I decided I cannot justify using private properties going forward. The tradeoff is simply not worth it. There is no real workaround for proxy-ability, whereas for private fields there is always private-by-convention. Does it suck? Absolutely. However, a sucky workaround is better than a nonexistent one.
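For contrast, here is what the private-by-convention fallback looks like in the same hypothetical sketch; the underscore-prefixed property is still technically public, but proxied instances keep working:

class Color {
	constructor(coords) {
		this._coords = coords; // "private" by convention only
	}

	get coords() {
		return this._coords; // ordinary property access goes through the proxy just fine
	}
}

new Proxy(new Color([1, 0, 0]), {}).coords; // [1, 0, 0], no TypeError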

Also, I control the internal implementation of my classes, whereas proxying is done by other parties. As a library user, it must be incredibly confusing to have to deal with errors about private field access in a class you did not write.

This was one of the saddest PRs I have ever written: https://github.com/LeaVerou/color.js/pull/306. It feels like such a huge step backwards. I waited years for private fields to be supported everywhere and relished it when they finally were. I was among the first library authors to adopt them in library code, before a lot of tooling even parsed them properly (and some tools still don’t).

Sure, they were kind of annoying to use (you usually want protected, i.e. visible to subclasses, not actually private), but they were better than nothing. I was not joking in the first paragraph; I am literally grieving.

I may still use private fields on a case-by-case basis, where I cannot imagine proxying the objects being very useful, for example in web components. But from now on I will not reach for them without thought, like I have been doing for the past couple of years.

Categories: Original, Thoughts

On ratings and meters

Reading Time: 3 minutes

I always thought that the semantically appropriate way to represent a rating (e.g. a star rating) is a <meter> element. They essentially convey the same type of information; the star rating is just a different presentation.

An example of a star rating widget, from Amazon

However, trying to style a <meter> element to look like a star rating is …tricky at best. Not to mention that this approach won’t even work in Shadow trees (unless you include the CSS in every single shadow tree).

So, I set out to create a proper web component for star ratings. The first conundrum was, how does this relate to a <meter> element?

  • Option 1: Should it extend <meter> using built-in extends?
  • Option 2: Should it use a web component with a <meter> in its Shadow DOM?
  • Option 3: Should it be an entirely separate web component that just uses the meter ARIA role and related ARIA attributes? (A rough sketch of this option follows the list.)
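For illustration, here is a rough sketch of what Option 3 could look like, assuming a hypothetical <star-rating> tag name and a made-up value/max attribute API (neither is necessarily what the final component uses):

class StarRating extends HTMLElement {
	static observedAttributes = ["value", "max"];

	connectedCallback() {
		this.setAttribute("role", "meter");
		this.render();
	}

	attributeChangedCallback() {
		this.render();
	}

	render() {
		let max = Number(this.getAttribute("max")) || 5;
		let value = Math.max(0, Math.min(Number(this.getAttribute("value")) || 0, max));

		// Expose meter semantics via ARIA
		this.setAttribute("aria-valuemin", "0");
		this.setAttribute("aria-valuemax", max);
		this.setAttribute("aria-valuenow", value);

		// Naïve presentation: full vs. empty stars
		let full = Math.round(value);
		this.textContent = "★".repeat(full) + "☆".repeat(Math.max(max - full, 0));
	}
}

customElements.define("star-rating", StarRating);

// Usage: <star-rating value="3.5" max="5"></star-rating>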
Categories: Rants, Thoughts

The failed promise of Web Components

Reading Time: 4 minutes

Web Components had so much potential to empower HTML to do more, and make web development more accessible to non-programmers and easier for programmers. Remember how exciting it was every time we got new shiny HTML elements that actually do stuff? Remember how exciting it was to be able to do sliders, color pickers, dialogs, disclosure widgets straight in the HTML, without having to include any widget libraries?

The promise of Web Components was that we’d get this convenience, but for a much wider range of HTML elements, developed much faster, as nobody needs to wait for the full spec + implementation process. We’d just include a script, and boom, we have more elements at our disposal!

Or, that was the idea. Somewhere along the way, the space got flooded by JS framework aficionados, who revel in complex APIs, overengineered build processes, and dependency graphs that look like the roots of a banyan tree.

This is what the roots of a Banyan tree look like. Photo by David Stanley on Flickr (CC-BY).
Categories: Rants, Thoughts

On the blindness of blind reviews

Reading Time: 2 minutes

Over the last couple of years, blind reviews have been popularized as the ultimate method for fair talk selection in industry conferences. While I don’t really submit proposals myself, I have served several times on the other side of the process, doing speaker selection in conference committees, and the more data points I collect, the more convinced I become that the blind selection process is fundamentally flawed.

Blind reviews come from the world of academia. However, in academic conferences, you do not judge a talk by a 1-2 paragraph abstract, but by a 10+ page paper, so there’s way more to judge by. In addition, in academia the content of the research matters infinitely more than the quality of a talk. In industry conferences, selection committees in blind reviews have both way less data to use, and a much harder task, as they need to balance several factors (content, speaker skill, talk quality etc). It’s no surprise that the results end up being even more of a gamble.

Blind reviews result in conservative talk selection. More often than not, I remember my fellow committee members and myself saying “Damn, this talk could be great with the right presenter, but that’s rare” and giving it a poor or average score. Few topics can make good talks regardless of the presenter. Therefore, when there is little information on the speaker in the initial selection round, talk selection ends up being conservative, rejecting more challenging topics that need a skilled speaker to shine and sticking to safer choices.

One of my most successful talks ever was “The humble border-radius”, which was shortlisted for a .net award for Conference Talk of The Year 2014. It would never have passed any blind review. No committee in its right mind would have accepted a 45-minute talk about …border-radius. The conferences I presented it at invited me as a speaker, carte blanche, and trusted me to present on whatever I felt like. Judging by the reviews, they were not disappointed.

In addition, all too many times I’ve seen great speakers get poor scores in blind reviews, not because their talks were not good, but because writing good abstracts is an entirely separate skill. Blind reviews remove anything that could cause bias, but they do so by stripping all personality away from a proposal. In addition, a good abstract for a blind review is not necessarily a good abstract in general. For example, blind reviews penalize more mysterious/teasy abstracts and tend to be skewed towards overly detailed ones, since the abstract is the only data the committee gets for these talks (bonus points here for CfS forms that have a separate field for extra details meant only for the organizers).

“But what about newcomers to the conference circuit? What about bias elimination?” one might ask. Both very valid concerns. I’m not saying any kind of anonymization is a bad idea. I’m saying that in their present form in industry conferences, blind reviews are flawed. For example, an initial round of blind reviews to pick good talks, without rejecting any at that stage, would probably solve these issues, without suffering from the flaws mentioned above.

Disclaimer: I do recognize that most people in these committees are doing their best to select fairly, and putting many hours of (usually volunteer) work in it. I’m not criticizing them, I’m criticizing the process. And yes, I recognize that it’s a process that has come out of very good intentions (eliminating bias). However, good intentions are not a guarantee for infallibility.

Categories: Thoughts

Idea: Extending native DOM prototypes without collisions

Reading Time: 3 minutes

As I pointed out in yesterday’s blog post, one of the reasons why I don’t like using jQuery is its wrapper objects. For jQuery, this was a wise decision: back in 2006 when it was first developed, IE releases had a pretty icky memory leak bug that could easily be triggered when one added properties to elements. Oh, and we also didn’t have access to element prototypes in IE back then, so we had to add these properties manually on every element. Prototype.js attempted to go that route, and the result was such a mess that they reversed that decision in Prototype 2.0 and went with wrapper objects too. There were even long essays written back then about what a monumentally bad idea it was to extend DOM elements.

The first IE release that exposed element prototypes was IE8: we got access to Node.prototype, Element.prototype and a few more. Some were mutable, some were not. In IE9, we got the full set, including HTMLElement.prototype and its descendants, such as HTMLParagraphElement. The memory leak bugs were mitigated in IE8 and fixed in IE9. However, we still don’t extend native DOM elements, and for good reason: collisions are still a very real risk. No library wants to add a bunch of methods on elements; it’s just bad form. It’s like being invited into someone’s house and defecating all over the floor.

But what if we could add methods to elements without the risk of collisions? (Well, technically, by minimizing said risk.) We could add only one property to Element.prototype, and then hang all our methods off of that. E.g. if our library were called yolo and had two methods, foo() and bar(), calls to it would look like:

var element = document.querySelector(".someclass");
element.yolo.foo();
element.yolo.bar();
// or you can even chain, if you return the element in each of them!
element.yolo.foo().yolo.bar();

Sure, it’s more awkward than wrapper objects, but the benefit of using native DOM elements is worth it if you ask me. Of course, YMMV.

It’s basically exactly the same thing we do with globals: We all know that adding tons of global variables is bad practice, so every library adds one global and hangs everything off of that.

However, if we try to implement something like this in the naïve way, we will find that it’s kind of hard to reference the element used from our namespaced functions:

Element.prototype.yolo = {
	foo: function () {
		console.log(this); 
	},
	
	bar: function () { /* ... */ }
};

someElement.yolo.foo(); // Object {foo: function, bar: function}

What happened here? this inside any of these functions refers to the object they are called on, not the element that object hangs off of! We need to be a bit more clever to get around this issue.

Keep in mind that if any code ran when the yolo property was accessed, its this would be the element we’re trying to hang these methods off of. But a plain object property doesn’t run any code, so we’re not taking advantage of that. If only we could get a reference to that context! Running a function (e.g. element.yolo().foo()) would give us one, but it would spoil our nice API.

Wait a second. We can run code on properties, via ES5 accessors! We could do something like this:

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		return {
			element: this,
			foo: function() {
				console.log(this.element);
			},
			
			bar: function() { /* ... */ }
		}
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

This works, but there is a rather annoying issue here: we are generating this object and redefining our functions every single time the property is accessed. That is rather bad for performance. Ideally, we want to generate this object once and return the generated object from then on. We also don’t want every element to have its own completely separate instance of the functions we defined; we want to define these functions on a prototype and use the wonderful JS inheritance for them, so that our library is also dynamically extensible. Luckily, there is a way to do all this too:

var Yolo = function(element) {
	this.element = element;
};

Yolo.prototype = {
	foo: function() {
		console.log(this.element);
	},
	
	bar: function() { /* ... */ }
};

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		Object.defineProperty(this, "yolo", {
			value: new Yolo(this)
		});
		
		return this.yolo;
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

// And it’s dynamically extensible too!
Yolo.prototype.baz = function(color) {
	this.element.style.background = color;
};

someElement.yolo.baz("red") // Our element gets a red background

Note that in the above, the getter is only executed once per element. On first access, it shadows the prototype’s yolo accessor with an own property holding a static value: an instance of the Yolo object. Since we’re using Object.defineProperty() we also don’t run into the issue of breaking enumeration (for..in loops), since these properties have enumerable: false by default.

There is still the wart that these methods need to use this.element instead of this. We could fix this by wrapping them:

for (let method in Yolo.prototype) {
	let callback = Yolo.prototype[method];
	
	Yolo.prototype[method] = function () {
		var ret = callback.apply(this.element, arguments);
		
		// Return the element, for chainability!
		return ret === undefined? this.element : ret;
	};
}

However, now you can’t dynamically add methods to Yolo.prototype and have them automatically work like the native Yolo methods in element.yolo, so it kinda hurts extensibility (of course you could still add methods that use this.element and they would work).

Thoughts?

Categories: Thoughts

CSS is for developers

Reading Time: 3 minutes

Quite often people assume that because the language I focus on is CSS, I must be a web designer. Don’t get me wrong, I love visual design with a passion. I have studied it a lot over the years and I’ve worked on several design projects for clients. Heck, I even have a dribbble profile!

However, if I had to pick one role, I would definitely consider myself more of a developer than a designer. I discovered coding on my own when I was 12, and so far it has been the longest-lasting love of my life. Although I lost my coding virginity to Visual Basic (something I’m still embarrassed about), over the years I’ve coded in Java, C, C++, C#, PHP and JavaScript before I even got to CSS. I actually studied Computer Science at university, graduated 4th in my class, and I’m gonna be doing research at MIT towards a PhD, starting fall 2014. Regarding design, I’m completely self-taught. My personality is more similar to the developers I know than the designers I know. Coding comes naturally, but I have to struggle to get better at design. I’m a better developer than I will ever be a designer.

Still, the assumption often is that I can’t possibly be a developer and interested in CSS, when there are all these amazing programming languages to focus my energy on instead. Therefore I must be a designer …right? There are even people who know about my open source projects, and still think that I can’t code in JavaScript or any other programming language (not sure how you can make most of these tools with pure CSS, but since CSS is Turing complete, I guess there must be a way!).

If you think I’m an exception, you’re mistaken. Everyone else in the W3C CSS Working Group, the group which defines the future of CSS, fits the profile of a developer much more than that of a designer. In fact, I might be the most designer-y person in it! Even outside the WG, the people I know who are really good at CSS are either developers or hybrids (designers & developers).

This is no coincidence. The skills required to write good CSS code are by and large the same skills required to write good code in general. CSS code also needs to be DRY, maintainable, flexible etc. CSS might have a visual output, but is still code, just like SVG, WebGL/OpenGL or the JavaScript Canvas API. It still requires the same kind of analytical thinking that programming does. Especially these days that most people use preprocessors for their CSS, with variables, math, conditionals and loops, it’s almost starting to look like programming!

I find it counter-productive that CSS in most jobs is assigned to designers. Designers should be doing what they do best and love: Design. Sure, they should be aware of the practical limitations of the medium and should be able to read and lightly edit CSS or hack together a prototype to show how their design behaves in different conditions, but it shouldn’t be their job to write CSS for production. The talents required to be a good designer and a good coder are very different and it’s unreasonable to expect both from everyone. Also, when you know you’re gonna have to implement the design you’re working on, it’s tempting to produce designs that can be easily converted to CSS, instead of pushing the boundaries. We don’t usually expect developers to design, even though it’s an added bonus when they have an eye for design as well. It should be the same for designers.

And if you’re a designer who writes amazing CSS and is about to tell me off in the comments, hold your horses. I’m not saying you shouldn’t be coding CSS. I’m saying that if you’re good at it, it means you’re both a designer AND a developer. Own it! 😀

Categories: Speaking, Thoughts

What makes speakers happy

Reading Time: 3 minutes

I wish I could speak at CSSConf.eu, but unfortunately I had to decline the invitation, as it collided with a prior speaking engagement I had already agreed to. I recently got another email from the organizers with an interesting question:

We want to make this event as stress-free for our speakers as possible. Since you spoke at a bunch of events, can you share a tip or two about what will make a speaker’s life easier, and their stay more pleasant? Any typical mistakes we can avoid?

I thought it was lovely that they care about their speakers enough to ask this, this already places them above average. I started writing a reply, but I soon realized this is information that could be useful for other conference organizers as well, so I decided to post it here instead. So, what makes speakers happy?

The baseline

These are things every good conference is doing for their speakers, although they often miss one or two. They keep speakers happy, but they’re not out of the ordinary.

  • Cover their flights, accommodation for the entire conference and ground transportation from/to the airport (with a car, not public transport!).
  • Do not expect them to go through the hassle of booking all those themselves and then sending you receipts. Offer it as an option, but book them yourself by default.
  • Do not book flights without confirming the itinerary and personal info with them first. Also, this sounds obvious, but it’s surprising how many conferences have made this mistake with me: Type their name correctly when booking flights!
  • If hotel WiFi is not free, make sure it’s covered and included in their reservation. Same goes for breakfast.
  • Offer an honorarium, at least to those who have to take time off work to speak at your event (e.g. freelancers). Even if your budget is small and you can only give a tiny honorarium, it will at least cover their meals, cabs etc. while there. If the honorarium is small and mainly intended to cover miscellaneous expenses of the trip, don’t ask them to submit an invoice to claim it.
  • Have a speakers dinner before the event, where they can meet and socialize with the other speakers. This is also good for the conference: once speakers have had the chance to catch up with their speaker friends (there aren’t that many people on the conference circuit, so we often know each other and want to catch up), they will talk more to the attendees during the conference itself. Make sure the speakers dinner does not overlap with the pre-party, if you have one.
  • Do a tech check before their talk to make sure everything is smooth. Have dongles for Mac laptops. Have clickers they could use. Use wireless lapel microphones. Have a reliable private wifi network for speakers to use if they need an internet connection for their talk.
  • Have breaks between talks, so speakers have some margin for going overtime without impacting the schedule. If they are too stressed about getting through their talk fast, it won’t be a very good talk.

Going the extra mile

These are all things one or more conferences have done for me, but they are not generally common so they are a positive surprise when they happen, not something expected.

  • Book Business class flights, especially for longer flights where passengers are expected to sleep. It’s so much more comfortable to sleep in a seat that fully reclines! I was incredibly grateful to the one conference that did this.
  • Cover incidentals in the hotel. Yes, it’s a bit risky but come on, we’re not rockstars. We won’t screw you over. In most cases it will be a pretty small extra cost and it looks really good, it tells speakers you trust them and want them to have a good time.
  • Offer a speaker gift bag. It can contain all kinds of things: Stuff that will make their stay more comfortable (stain remover, travel toothbrush etc), souvenirs from the place since we rarely have time to do touristy stuff, alcohol for impromptu get togethers with other speakers, snacks to eat during a late night craving in the hotel room, anything goes and I’ve seen conferences put all kinds of stuff in there. It’s a nice welcome gesture. Bonus points if they’re personalized based on what you’ve researched about the speaker.
  • Send out a survey to the audience after the conference and let the speakers know how they did. Let them know what comments their talk got and how well they did compared to other speakers.

Also, make sure you read PPK’s excellent Conference Organizer’s Handbook.

Categories: Articles, Thoughts

One year of pastries

Reading Time: 7 minutes

Last September, I was approached by Alex Duloz, who invited me to take part in his ambitious new venture, The Pastry Box Project. Its goal was to gather 30 people (“bakers”) every year who are influential in their field and ask them to share twelve thoughts, one per month. For 2012, that field would be the Web. I was honored by the invitation and accepted without a second thought (no pun intended). The project was quite successful and recently we all (almost) agreed for The Pastry Box Project to become a book, whose profits will be donated to charity.

The initial goal of the project was to gather thoughts somehow related to the bakers’ work. Although many stuck to that topic, for many others it quickly drifted away from it, and they often sent thoughts that were general musings about their lives or life in general. For me …well, let’s just say I was never good at sticking to the topic at hand. 😉

The Pastry Box showed me that I want a personal blog, so I made one today. I will still publish personal stuff here, as long as it’s even remotely web-related, so not much will change. However, my interests extend beyond the Web, so I will now have another medium to express myself in. 🙂

Since 2012 is now over, I decided to gather all my “pastries” and publish them in two blog posts: I will post the more techy/professional ones below and the more general/personal ones in my personal blog. Since most of them were somewhere in the middle, it wasn’t easy to pick which ones to publish where. I figured the best solution is to allow for some overlap and publish most of them in both blogs.

Categories: Thoughts

Teaching: General case first or special cases first?

Reading Time: 2 minutes

A common dilemma while teaching (I’m not only talking about teaching in a school or university; talks and workshops are also teaching) is whether it’s better to first teach some easy special cases and then generalize, or first the general case and then present special cases as merely shortcuts.

I’ve been revisiting this dilemma recently, while preparing the slides for my upcoming regular expressions talks. For example: Regex quantifiers.

1. General rule first, shortcuts after

You can use {m,n} to control how many times the preceding group can repeat (m = minimum, n = maximum). If you omit n (like {m,}), it’s implied to be infinity (= “at least m times”, with no upper bound). Each of the shortcuts below is just a special case of this rule; a quick demo follows the list.

  • {m,m} can also be written as {m}
  • {0,1} can also be written as ?
  • {0,} can also be written as *
  • {1,} can also be written as +
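To make the equivalences above concrete, here is a quick console check (the test strings are mine, purely for illustration):

// Each shortcut is just shorthand for the general {m,n} form
console.log(/^a{0,1}$/.test("a"),   /^a?$/.test("a"));     // true true
console.log(/^a{0,}$/.test("aaa"),  /^a*$/.test("aaa"));   // true true
console.log(/^a{1,}$/.test(""),     /^a+$/.test(""));      // false false
console.log(/^a{3,3}$/.test("aaa"), /^a{3}$/.test("aaa")); // true true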

Advantages & disadvantages of this approach

  • Harder to understand the general rule, so the student might lose interest before moving on to the shortcuts
  • After understanding the general rule, all the shortcuts are then trivial.
  • If they only remember one thing, it will be the general rule. That’s good.

2. Special cases first, general rule after

  • You can add ? after a group to make it optional (it may appear, but it doesn’t have to).
  • If you don’t care about how many times something appears (if at all), you can use *.
  • If you want something to appear at least once, you can use +.
  • If you want something to be repeated exactly n times, you can use {n}.
  • If you want to set specific upper and lower bounds, you can use {m,n}. Omit the n for no upper bound.

Advantages & disadvantages of this approach

  • Easy to understand the simpler special cases, building up student interest
  • More total effort required, as every shortcut seems like a separate new thing until you get to the general rule
  • Special cases make it easier to understand the general rule when you get to it

What usually happens

In most cases, educators seem to favor the second approach. In the example of regex quantifiers, pretty much every regex book or talk explains the shortcuts first and the general rule afterwards. In other disciplines, such as Mathematics, I think both approaches are used just as often.

What do you think? Which approach do you find easier to understand? Which approach do you usually employ while teaching?

Categories: Articles, Thoughts

In defense of reinventing wheels

Reading Time: 3 minutesOne of the first things a software engineer learns is “don’t reinvent the wheel”. If something is already made, use that instead of writing your own. “Stand on the shoulders of giants, they know what they’re doing better than you”. Writing your own tools and libraries, even when one already exists, is labelled “NIH syndrome”  and is considered quite bad.