Anne van Kesteren

Undue base URL influence

The URL parser has many quirks due to its origins in a time when conformance test suites were atypical and implementation requirements were hidden in the examples section. Some consider these quirks deeply problematic, but personally I don’t really mind that one can write a hundred slashes after a scheme instead of two and get identical results. Sure, it would be better if that were not the case, but in the end it is something that is normalized away and therefore does not impact the fundamental aspects of the URL ecosystem.

I was reminded the other day that there is one quirk, however, that does yield rather undesirable results. In particular, for certain (non-conforming) inputs the result will not be failure, but the exact URL returned will depend on the presence and type of the base URL. This might be best explained with examples:

Input       Base URL (serialized)  Output (serialized)
https:test  (none)                 https://test/
https:test  http://example/        https://test/
https:test  https://example/       https://example/test
hello:test  (none)                 hello:test
hello:test  bye://example/         hello:test
hello:test  hello://example/       hello:test

This quirk only impacts so-called special schemes, which include http and https, and only when the input’s scheme matches the base URL’s scheme. As a user of URLs you could work around this quirk by first parsing without a base URL and, only if that returns failure, parsing a second time with a base URL. That does have the unfortunate side effect of being inconsistent with the web platform (for non-conforming input), but depending on your use case that might be okay.
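
In code, that workaround could look something like the following sketch, using the URL constructor (parseURL is just a hypothetical helper name):

function parseURL(input, base) {
  // Prefer parsing the input on its own; only fall back to the base URL
  // if the input by itself is not a valid absolute URL.
  try {
    return new URL(input);
  } catch {
    return new URL(input, base);
  }
}

parseURL("https:test", "https://example/"); // https://test/, not https://example/test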

I remember looking into whether this could be removed completely many years ago, but websites relied on it and end users trump theory.

WebKit and web-platform-tests

Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area, so you might want to take advice from someone else.

Once I’ve identified which tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it’ll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor, so perhaps it’s not surprising that my approach to test development starts with web-platform-tests.

I then run import-w3c-tests web-platform-tests/[testsDir] -s [wptParentDir] --clean-dest-dir on the WebKit side to ensure it has the latest tests, including any changes I made. Then I usually run the tests and revise as needed.

This has worked surprisingly well for the changes I have made to date and hasn’t let me down. Two things to be mindful of:

Contributing to WebKit

Over the last couple of weeks I have looked at the WebKit code, out of personal curiosity, with the intent of fixing a few things in areas of the web platform I’m familiar with. The code had always appeared hackable to me, but I had never given it a go in practice. In fact, this is the first time I have written C++ in my life! Marcos had given me a quick guide to set up my environment and I was off to the races.

It has been a lot of fun trying to get things to compile and making tests pass, and also having the chance to study how things are implemented in more detail. I wish web standards had some of the checks available in C++. Now I am well aware that C++ does not have the best reputation, but English is even more error-prone. Granted, the level of abstraction English sits at can also make things easier.

I can heartily recommend this to anyone who has been interested in doing this, but didn’t because it seemed too intimidating. Mind that the first contribution is a bit of a hurdle and can be humbling. Definitely was for me! I recommend tackling something that seems doable. And remember that as with a lot of things it will get easier over time.

Apple

I have often wondered what it would be like to work for Apple and now I have the privilege to find out! There are a lot of amazing people on the WebKit team I have had the pleasure of meeting and collaborating with. Getting to do a lot more of that will be terrific. Writing this, it’s still somewhat difficult to comprehend that it’s actually happening.

Apple itself is the sole institution that develops computers end-to-end for the rest of us. It is hard to overstate how exciting it is to be a small part of that for a person who enjoys computing systems. And WebKit too is truly great. With origins in KHTML, it’s a web browser engine that’s over two decades old now. But from my bird’s-eye perspective the code doesn’t look it. The community cares about refactoring, long-term maintainability, and hackability of the code, as well as making it more accessible to new contributors with the ongoing move to GitHub. A lot of care seems to go into how it is evolved: maintaining compatibility, ensuring new code reuses existing primitives, and implementing standards to the letter. (Or if you’re Antti, you might not read the letters and just make the tests pass.) And I have reason to believe this isn’t just the community; these values are shared by Apple.

Suffice it to say, I am humbled to have been given this opportunity and look forward to learning a lot!

After Mozilla

As mentioned in my previous post, I will no longer be employed by MoCo in July. This might leave some of you with questions, which I will attempt to answer here. Most significantly:

I plan to continue being involved in developing the web platform and encouraging folks to leave their sense of logic at the door, but I suspect that until at least September or so I will be mostly otherwise occupied due to a mix of vacation and onboarding. A lot of that coincides with European vacation time so you might not end up noticing anything. If you do notice, reach out on WHATWG chat and presumably someone there can help you or at least provide some pointers. I expect to also check in on occasion.

Leaving Mozilla

I will officially be leaving Mozilla on the last day of June. My last working day will be June 16. Perhaps I should say I will be leaving the Mozilla Corporation — MoCo, as it’s known internally. After all, once you’re a Mozillian, you’re always a Mozillian. I was there for a significant part of my life — nine years, most of them great, some tough. I was empowered and supported by leadership to move between cities and across countries. I started by moving to London (the first time I lived abroad) in February 2013, then Zürich in May 2014, Engelberg (my personal favorite) in May 2015, Zürich again in February 2017, and now here in Berlin since September 2018. In the same period I moved in with my wonderful partner and we became the lucky parents of two amazing children. It isn’t always easy, but I wouldn’t trade it for the world. They bring me joy every day.

It’s been such a privilege and humbling experience to be able to learn about the internet, browsers, and systems engineering from some of the most talented, kind, and caring people in that space. And furthermore, to be able to build it with them, in my small way. They are always seeking to truly solve problems by approaching them from first principles, as well as looking to raise the layers of abstraction upon which we build the digital world. And then actually doing it, too. I recently read A Philosophy of Software Design by John Ousterhout and it struck me that a lot of the wisdom in that book was imparted to me during my time here.

I am extremely grateful to my beautiful colleagues, friends, and leadership at Mozilla for making this a period in my life I will treasure forever. So long, and thanks for all the browser engines. And remember, always ask: is this good for the web? ❤️

Farewell Emil

When I first moved to Zürich I had the good fortune to have dinner with Emil. I had never before met someone with such a passion for food. (That day I met two.) Except for the food, we had a good time. I found it particularly enjoyable that he was so upset — though in a very upbeat manner — with the quality of the food that having dessert there was no longer on the table.

The last time I remember running into Emil was in Lisbon, enjoying hamburgers and fries of all things. (Rest assured, they were very good.)

Long before all that, I used to frequent EAE.net, to learn how to make browsers do marvelous things and improve user-computer interaction.

Feature detection of SharedArrayBuffer objects and shared memory

If you are using feature detection with SharedArrayBuffer objects today you are likely impacted by upcoming changes to shared memory. In particular, you can no longer assume that if you have access to a SharedArrayBuffer object you can also use it with postMessage(). Detecting if SharedArrayBuffer objects are exposed can be done through the following code:

if (self.SharedArrayBuffer) {
  // SharedArrayBuffer objects are available.
}

Detecting if shared memory is possible by using SharedArrayBuffer objects in combination with postMessage() and workers can be done through the following code:

if (self.crossOriginIsolated) {
  // Passing SharedArrayBuffer objects to postMessage() will succeed.
}
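
To illustrate the difference in practice, here is a minimal sketch of sharing memory with a worker; worker.js is a hypothetical script, and the postMessage() call only succeeds in a cross-origin isolated environment:

const worker = new Worker("worker.js"); // hypothetical worker script
const sharedBuffer = new SharedArrayBuffer(1024);
// Throws in environments that are not cross-origin isolated.
worker.postMessage(sharedBuffer);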

Please update your code accordingly!

(As indicated in the aforelinked changes document, obtaining a cross-origin isolated environment (i.e., one wherein self.crossOriginIsolated returns true) requires setting two headers and a secure context. Simply put: the Cross-Origin-Opener-Policy header isolates you from attackers and the Cross-Origin-Embedder-Policy header isolates you from victims.)
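
For concreteness, and as I understand the specifications at the time of writing, those headers would be served with the following values:

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp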

Shadow tree encapsulation theory

A long time ago Maciej wrote down five types of encapsulation for shadow trees (i.e., node trees that are hidden in the shadows from the document tree):

  1. Encapsulation against accidental exposure — DOM nodes from the shadow tree are not leaked via pre-existing generic APIs — for example, events flowing out of a shadow tree don't expose shadow nodes as the event target.
  2. Encapsulation against deliberate access — no API is provided which lets code outside the component poke at the shadow DOM. Only internals that the component chooses to expose are exposed.
  3. Inverse encapsulation — no API is provided which lets code inside the component see content from the page embedding it (this would have the effect of something like sandboxed iframes or Caja).
  4. Isolation for security purposes — it is strongly guaranteed that there is no way for code outside the component to violate its confidentiality or integrity.
  5. Inverse isolation for security purposes — it is strongly guaranteed that there is no way for code inside the component to violate the confidentiality or integrity of the embedding page.

Types 3 through 5 do not have any kind of support, and type 4 and 5 encapsulation would be hard to pull off due to Spectre. User agents typically use a weaker variant of type 4 for their internal controls, such as the video and input elements, that does not protect confidentiality. The DOM and HTML standards provide type 1 and 2 encapsulation to web developers, with type 2 existing mostly due to Apple and Mozilla pushing rather hard for it. It might be worth providing an updated definition for the first two as we’ve come to understand them:

  1. Open shadow trees — no standardized web platform API provides access to nodes in an open shadow tree, except APIs that have been explicitly named and designed to do so (e.g., composedPath()). Nothing should be able to observe these shadow trees other than through those designated APIs (or “monkey patching”, i.e., modifying objects). Limited form of information hiding, but no integrity or confidentiality.
  2. Closed shadow trees — very similar to open shadow trees, except that designated APIs also do not get access to nodes in a closed shadow tree.
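
In code, the difference between the two surfaces through the mode option to attachShadow() and through designated APIs such as the shadowRoot getter; a minimal sketch using a custom element:

class XThing extends HTMLElement {
  constructor() {
    super();
    // "closed" hides the tree even from designated APIs; "open" does not.
    const shadow = this.attachShadow({ mode: "closed" });
    shadow.innerHTML = "<p>internals</p>";
  }
}
customElements.define("x-thing", XThing);

const element = document.createElement("x-thing");
element.shadowRoot; // null for closed trees; the ShadowRoot object for open ones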

Type 2 encapsulation gives component developers control over what remains encapsulated and what is exposed. You need to take all your users into account and expose the best possible public API for them. At the same time, it protects you from folks taking a dependency on the guts of the component: aspects you might want to refactor or add functionality to over time. This is much harder with type 1 encapsulation, as there will be APIs that can reach into the details of your component, and if users rely on those you cannot refactor it without updating all the callers.

Now, both type 1 and type 2 encapsulation can be circumvented, e.g., by a script changing the attachShadow() method or mutating another builtin that the component has taken a dependency on. That is, there is no integrity, and as component and page run in the same origin there is no security boundary either. The limited form of information hiding is primarily a maintenance boon and a way to manage complexity.
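
To make the circumvention concrete: a page can patch attachShadow() before any component code runs and retain a reference to every shadow root, closed or not. A sketch, where capturedRoots is a hypothetical store:

const capturedRoots = [];
const originalAttachShadow = Element.prototype.attachShadow;
Element.prototype.attachShadow = function (init) {
  const shadow = originalAttachShadow.call(this, init);
  capturedRoots.push(shadow); // the page now sees "closed" internals too
  return shadow;
};

Maciej addresses this as well: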

If the shadow DOM is exposed, then you have the following risks:

  1. A page using the component starts poking at the shadow DOM because it can — perhaps in a rarely used code path.
  2. The component is updated, unaware that the page is poking at its guts.
  3. Page adopts new version of component.
  4. Page breaks.
  5. Page author blames component author or rolls back to old version.

This is not good. Information hiding and hiding of implementation details are key aspects of encapsulation, and are good software engineering practices.

Heading levels

The HTML Standard has contained an algorithm to compute heading levels for the past fifteen years or so; it is fairly complex and not implemented anywhere. E.g., for the following fragment

<body>
 <h4>Apples</h4>
 <p>Apples are fruit.</p>
 <section>
  <div>
   <h2>Taste</h2>
   <p>They taste lovely.</p>
  </div>
  <h6>Sweet</h6>
  <p>Red apples are sweeter than green ones.</p>
  <h1>Color</h1>
  <p>Apples come in various colors.</p>
 </section>
</body>

the headings would be “Apples” (level 1), “Taste” (level 2), “Sweet” (level 3), “Color” (level 2). Determining the level of any given heading requires traversing through its previous siblings and their descendants, its parent and the previous siblings and descendants of that, et cetera. That is too much complexity and optimizing it with caches is evidently not deemed worth it for such a simple feature.

However, throwing out the entire feature and requiring everyone to use h1 through h6 forever, adjusting them accordingly based on the document they end up in, is not very appealing to me. So I’ve been trying to come up with an alternative algorithm that would allow folks to use h1 with sectioning elements exclusively while giving assistive technology the right information (default styling of h1 is already adjusted based on nesting depth).

The simpler algorithm only looks at ancestors for a given heading and effectively only does so for h1 (unless you use hgroup). This leaves the above example in the weird state today’s browsers put it in, except that the h1 (“Color”) would become level 2. It does so to minimally impact existing documents, which usually use h1 only as a top-level element or, per the somewhat erroneous recommendation of the HTML Standard, everywhere; in the latter case it would dramatically improve the outcome.
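
My reading of that simpler algorithm, as a non-normative sketch that ignores hgroup: an h1’s level is one plus the number of sectioning content ancestors, while h2 through h6 keep their literal level.

function headingLevel(heading) {
  const level = Number(heading.tagName[1]); // 1 through 6
  if (level !== 1) return level; // only h1 is affected
  const sectioningContent = ["ARTICLE", "ASIDE", "NAV", "SECTION"];
  let depth = 1;
  for (let node = heading.parentElement; node; node = node.parentElement) {
    if (sectioningContent.includes(node.tagName)) depth++;
  }
  return depth;
}

For the fragment above this yields level 2 for the h1 (“Color”), matching the intended outcome.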

I’m hopeful we can have a prototype of this in Firefox soon and eventually supplement it with a :heading/:heading(…) pseudo-class to provide additional benefits to folks who level their headings correctly. Standards-wise, much of this is being sorted out in whatwg/html #3499 and various issues linked from there.