IPv4, IPv6, and a sudden change in attitude


A few years ago I wrote The World in Which IPv6 was a Good Design. I'm still proud of that article, but I thought I should update it a bit.

No, I'm not switching sides. IPv6 is just as far away from universal adoption, or being a "good design" for our world, as it was three years ago. But since then I co-founded a company that turned out to be accidentally based on the principles I outlined in that article. Or rather, from turning those principles upside-down.

In that article, I explored the overall history of networking and the considerations that led to IPv6. I'm not going to cover that ground again. Instead, I want to talk about attitude.

Internets, Interoperability, and Postel's Law

Did you ever wonder why "Internet" is capitalized?

When I first joined the Internet in the 1990s, I found some now-long-lost introductory tutorial. It talked about the difference between an internet (lowercase i) and the Internet (capital I). An internet is "any network that connects smaller networks together." The Internet is... well... it turns out that you don't need more than one internet. If you have two internets, it is nearly unavoidable that someone will soon figure out how to connect them together. All you need is one person to build that one link, and your two internets become one. By induction then, the Internet is the end result when you make it easy enough for a single motivated individual to join one internet to another, however badly.

Internets are fundamentally sloppy. No matter how many committees you might form, ultimately connections are made by individuals plugging things together. Those things might follow the specs, or not. They might follow those specs well, or badly. They might violate the specs because everybody else is also violating the specs and that's the only way to make anything work. The connections themselves might be fast or slow, or flakey, or only functional for a few minutes each day, or subject to amateur radio regulations, or worse. The endpoints might be high-powered servers, vending machines, toasters, or satellites, running any imaginable operating system. Only one thing's for sure: they all have bugs.

Which brings us to Postel's Law, which I always bring up when I write about networks. When I do, invariably there's a slew of responses trying to debate whether Postel's Law is "right," or "a good idea," as if it were just an idea and not a force of nature.

Postel's Law says simply this: be conservative in what you send, and liberal in what you accept. Try your best to correctly handle the bugs produced by the other end. The most successful network node is one that plans for every "impossible" corruption there might be in the input and does something sensible when it happens. (Sometimes, yes, "something sensible" is to throw an error.)

[Side note: Postel's Law doesn't apply in every situation. You probably don't want your compiler to auto-fix your syntax errors, unless your compiler is javascript or HTML, which, kidding aside, actually were designed to do this sort of auto-correction for Postel's Law reasons. But the law does apply in virtually every complex situation where you need to communicate effectively, including human conversations. The way I like to say it is, "It takes two to miscommunicate." A great listener, or a skilled speaker, can resolve a lot of conflicts.]

Postel's Law is the principle the Internet is based on. Not because Jon Postel was such a great salesperson and talked everyone into it, but because that is the only winning evolutionary strategy when internets are competing. Nature doesn't care what you think about Postel's Law, because the only Internet that happens will be the one that follows Postel's Law. Every other internet will, without exception, eventually be joined to The Internet by some goofball who does it wrong, but just well enough that it adds value, so that eventually nobody will be willing to break the connection. And then to maintain that connection will require further application of Postel's Law.

IPv6: a different attitude

If you've followed my writing, you might have seen me refer to IPv6 as "a second internet that not everyone is connected to." There's a lot wrapped up in that claim. Let's back up a bit.

In The World in Which IPv6 was a Good Design, I talked about the lofty design goals leading to IPv6: eliminate bus networks, get rid of MAC addresses, no more switches and hubs, no NATs, and so on. What I didn't realize at the time, which I now think is essential, is that these goals were a fundamental attitude shift compared to what went into IPv4 (and the earlier protocols that led to v4).

IPv4 evolved as a pragmatic way to build an internet out of a bunch of networks and machines that existed already. Postel's Law says you'd best deal with reality as it is, not as you wish it were, and so they did. When something didn't connect, someone hacked on it until it worked. Sloppy. Fits and starts, twine and duct tape. But most importantly, nobody really thought this whole mess would work as well as it turned out to work, or last as long as it turned out to last. Nobody knew, at the time, that whenever you start building internets, they always lead inexorably to The Internet.

These (mostly) same people, when they started to realize the monster they had created, got worried. They realized that 32-bit addresses, which they had originally thought would easily last for the lifetime of their little internet, were not even enough for one address per person in the world. They found out, not really to anyone's surprise, that Postel's Law, unyielding as it may be, is absolutely a maintenance nightmare. They thought they'd better hurry up and fix it all, before this very popular Internet they had created, which had become a valuable, global, essential service, suddenly came crashing down and it would all be their fault.

[Spoiler: it never did come crashing down. Well, not permanently. There were and are still short-lived flare-ups every now and then, but a few dedicated souls hack it back together, and so it goes.]

IPv6 was created in a new environment of fear, scalability concerns, and Second System Effect. As we covered last time, its goal was to replace The Internet with a New Internet — one that wouldn't make all the same mistakes. It would have fewer hacks. And we'd upgrade to it incrementally over a few years, just as we did when upgrading to newer versions of IP and TCP back in the old days.

We can hardly blame people for believing this would work. Even the term "Second System Effect" was only about 20 years old at the time, and not universally known. Every previous Internet upgrade had gone fine. Nobody had built such a big internet before, with so much Postel's Law, with such a variety of users, vendors, and systems, so nobody knew it would be different.

Well, here we are 25 years later, and not much has changed. If we were feeling snarky, we could perhaps describe IPv6 as "the String Theory of networking": a decades-long boondoggle that attracts True Believers, gets you flamed intensely if you question the doctrine, and which is notable mainly for how much progress it has held back.

Luckily we are not feeling snarky.

Two Internets?

There are, of course, still no exceptions to the rule that if you build any internet, it will inevitably (and usually quickly) become connected to The Internet.

I wasn't sitting there when it happened, but it's likely the very first IPv6 node ran on a machine that was also connected to IPv4, if only so someone could telnet to it for debugging. Today, even "pure IPv6" nodes are almost certainly connected to a network that, if configured correctly, can find a way to any IPv4 node, and vice versa. It might not be pretty, it might involve a lot of proxies, NATs, bridges, and firewalls. But it's all connected.

In that sense, there is still just one Internet. It's the big one. Since day 1, The Internet has never spoken just one protocol; it has always been a hairy mess of routers, bridges, and gateways, running many protocols at many layers. IPv6 is one of them.

What makes IPv6 special is that its proponents are not content for it to be an internet that connects to The Internet. No! It's the chosen one. Its destiny is to be The Internet. As a result, we don't only have bridges and gateways to join the IPv6 internets and the IPv4 internet (although we do).

Instead, IPv6 wants to eventually run directly on every node. End users have been, uh, rather unwilling to give up IPv4, so for now, every node has that too. As a result, machines are often joined directly to what I call "two competing internets" --- the IPv4 one and the IPv6 one.

Okay, at this point our terminology has become very confusing. Sorry. But all this leads to the question I know you want me to answer: Which internet is better!?


I'll get to that, but first we need to revisit what I bravely called Avery's Laws of Wifi Reliability, which are not laws, were surely invented by someone else (since they're mostly a paraphrasing of a trivial subset of CAP theorem), and as it turns out, apply to more than just wifi. Oops. I guess the name is wrong in almost every possible way. Still, they're pretty good guidelines.

Let's refresh:

  • Rule #1: if you have two wifi router brands that work with 90% of client devices, and your device has a problem with one of them, replacing the wifi router brand will fix the problem 90% of the time. Thus, an ISP offering both wifi routers has a [1 - (10% x 10%)] = 99% chance of eventual success.

  • Rule #2: if you're running two wifi routers at once (say, a primary router and an extender), and both of them work "correctly" for about 90% of the time each day, the chance that your network has no problems all day is 81%.

In Rule #1, which I call "a OR b", success compounds and failure rates drop.

In Rule #2, which I call "a AND b", failure compounds and success drops.
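The arithmetic behind both rules fits in a few lines:

```python
def p_either_works(p_a, p_b):
    # Rule #1, "a OR b": you fail only if both options fail,
    # so success compounds and failure rates drop.
    return 1 - (1 - p_a) * (1 - p_b)

def p_both_work(p_a, p_b):
    # Rule #2, "a AND b": one flaky component ruins the day,
    # so failure compounds and success drops.
    return p_a * p_b
```

With two 90% options, `p_either_works(0.9, 0.9)` gives 99% while `p_both_work(0.9, 0.9)` gives 81%, which is the entire difference between the two rules.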

But wait, didn't we add redundancy in both cases?

Depending how many distributed systems you've had to build, this is either really obvious or really mind blowing. Why did the success rate jump to 99% in the first scenario but drop to 81% in the second? What's the difference? And... which one of those cases is like IPv6?


Or we can ask that question another way. Why are there so many web pages that advise you to solve your connectivity problem by disabling IPv6?

Because automatic failover is a very hard problem.

Let's keep things simple. IPv4 is one way to connect client A to server X, and IPv6 is a second way. It's similar to buying redundant home IPv4 connections from, say, a cable and a DSL provider and plugging them into the same computer. Either way, you have two independent connections to The Internet.

When you have two connections, you must choose between them. Here are some factors you can consider:

  • Which one even offers a path from A to X? (If X doesn't have an IPv6 address, for example, then IPv6 won't be an option.)

  • Which one gives the shortest paths from A to X and from X to A? (You could evaluate this using hopcount or latency, for example, like in my old netselect program.)

  • Which path has the most bandwidth?

  • Which path is most expensive?

  • Which path is most congested right now?

  • Which path drops out least often? (A rebooted NAT will drop a TCP connection on IPv4. But IPv6 routes change more frequently.)

  • Which one has buggy firewalls or NATs in the way? Do they completely block it (easy) or just act strangely (hard)?

  • Which one blocks certain UDP or TCP ports, intentionally or unintentionally?

  • Which one is misconfigured to block certain ICMP packets so that PMTU discovery (always or sometimes) doesn't work with some or all hosts?

  • Which one blocks certain kinds of packet fragmentation?

A common heuristic called "Happy Eyeballs" is one way to choose between routes, but it covers only a few of those criteria.

The truth is, it's extremely hard to answer all those questions, and even if you can, the answers are different for every combination of A and X, and they change over time. Operating systems, web browsers, and apps, even if they implement Happy Eyeballs or something equivalent, tend to be pretty bad at detecting all these edge cases. And every app has to do it separately!
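The core of Happy Eyeballs is easy to sketch, even if the edge cases aren't. A minimal, hypothetical version follows; real RFC 8305 implementations also race the DNS lookups and stagger every candidate address, not just the two address families:

```python
import socket
import threading

def happy_eyeballs_connect(host, port, v6_head_start=0.25, timeout=5.0):
    """Race an IPv6 and an IPv4 connection attempt; the first success wins.

    A simplified sketch of the RFC 8305 "Happy Eyeballs" idea, not a
    full implementation.
    """
    lock = threading.Lock()
    winner = []
    done = threading.Event()

    def attempt(family, delay):
        # Give the preferred family a head start; bail out if someone won.
        if done.wait(delay):
            return
        try:
            fam, typ, proto, _, sockaddr = socket.getaddrinfo(
                host, port, family, socket.SOCK_STREAM)[0]
            sock = socket.socket(fam, typ, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)
        except OSError:
            return  # no route, no address of this family, refused, timeout...
        with lock:
            if done.is_set():
                sock.close()  # lost the race; discard the extra socket
            else:
                winner.append((fam, sock))
                done.set()

    threads = [threading.Thread(target=attempt, args=(socket.AF_INET6, 0.0)),
               threading.Thread(target=attempt, args=(socket.AF_INET, v6_head_start))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return winner[0] if winner else None
```

Notice how many of the criteria in the list above this sketch never looks at: it only tests "can I open a TCP connection right now," which says nothing about PMTU, congestion, or what happens to the connection five minutes later.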

My claim is that the "choose between two internets" problem is the same as the "choose between two flakey wifi routers on the same SSID" problem (Rule #2). All is well as long as both internets (or both wifi routers) are working perfectly. As soon as one is acting weird, your overall results are going to be weird.

...and the Internet always acts weird, because of the tyranny of Postel's Law. Debugging the Internet is a full time job.

...and now there are two internets, with a surprisingly low level of overlap, so your ISP has to build and debug both.

...and every OS vendor has to debug both protocol implementations, which is more than twice as much code.

...and every app vendor has to test with both IPv4 and IPv6, which of course they don't.

We should not be surprised that the combined system is less reliable.

The dream

IPv6 proponents know all this, whether rationally or intuitively or at least empirically. The failure rate of two wonky internets joined together is higher than the failure rate of either wonky internet alone.

This leads them to the same conclusion you've heard so many times: we should just kill one of the internets, so we can spend our time making the one remaining internet less wonky, instead of dividing our effort between the two. Oh, and, obviously the one we kill will be IPv4, thanks.

They're not wrong! It would be a lot easier to debug with just one internet, and you know, if we all had to agree on one, IPv6 is probably the better choice.

But... we don't all have to agree on one, because of the awesome unstoppable terribleness that is Postel's Law. Nobody can declare one internet or the other to be officially dead, because the only thing we know for sure about internets is that they always combine to make The Internet. Someone might try to unplug IPv4 or IPv6, but some other jerk will plug it right back in.

Purity cannot ever be achieved at this kind of scale. If you need purity for your network to be reliable, then you have an unsolvable problem.

The workaround

One thing we can do, though, is build better heuristics.

Ok, actually we have to do better than that, because it turns out that correctly choosing between the two internets for each connection, at the start of that connection, is not possible or good enough. Problems like PMTU, fragmentation, NAT resets, and routing changes can interrupt a connection partway through and cause poor performance or dropouts.

I want to go back to a side note I left near the end of The World in Which IPv6 was a Good Design: mobile IP. That is, the ability for your connections to keep going even if you hop between IP addresses. If you had IP mobility, then you could migrate connections between your two internets in real time, based on live quality feedback. You could send the same packets over both links and see which ones work better. If you picked one link and it suddenly stopped, you could retransmit packets on the other link and pick up where you left off. Your precise heuristic wouldn't even matter that much, as long as it tries both ways eventually.

If you had IP mobility, then you could convert the "a AND b" scenario (failure compounds) into the "a OR b" scenario (success compounds).

And you know what, forget about IPv4 and IPv6. The same tricks would work with that redundant cable + DSL setup we mentioned above. Or a phone with both wifi and LTE. Or, given a fancy enough wifi client chipset, smoothly switching between multiple unrelated wifi routers.

This is what we do, in a small way, with Tailscale's VPN connections. We try all your Internet links, IPv4 and IPv6, UDP and TCP, relayed and peer-to-peer. We made mobile IP a real thing, if only on your private network for now. And what do you know, the math works. Tailscale with two networks is more reliable than Tailscale with one network.

Now, can it work for the whole Internet?

This article was originally posted to the Tailscale blog

On Liberating My Smartwatch From Cloud Services

I’ve often said that if we convince ourselves that technology is magic, we risk becoming hostages to it. Just recently, I had a brush with this fate, but happily, I was saved by open source.

At the time of writing, Garmin is suffering from a massive ransomware attack. I also happen to be a user of the Garmin Instinct watch. I’m very happy with it, and in many ways, it’s magical how much capability is packed into such a tiny package.

I also happen to have a hobby of paddling the outrigger canoe:

I consider the GPS watch to be an indispensable piece of safety gear, especially for the boat’s steer, because it’s hard to judge your water speed when you’re more than a few hundred meters from land. If you get stuck in a bad current, without situational awareness you could end up swept out to sea or worse.

The water currents around Singapore can be extreme. When the tides change, the South China Sea eventually finds its way to the Andaman Sea through the Singapore Strait, causing treacherous flows of current that shift over time. Thus, after every paddle, I upload my GPS data to the Garmin Connect cloud and review the route, in part to note dangerous changes in the ebb-and-flow patterns of currents.

While it’s a clear and present privacy risk to upload such data to the Garmin cloud, we’re all familiar with the trade-off: there’s only 24 hours in the day to worry about things, and the service just worked so well.

Until yesterday.

We had just wrapped up a paddle with particularly unusual currents, and my paddling partner wanted to know our speeds at a few of the tricky spots. I went to retrieve the data and…well, I found out that Garmin was under attack.

Garmin was being held hostage, and transitively, so was access to my paddling data: a small facet of my life had become a hostage to technology.

A bunch of my paddling friends recommended I try Strava. The good news is Garmin allows data files to be retrieved off of the Instinct watch, for upload to third-party services. All you have to do is plug the watch into a regular USB port, and it shows up as a mass storage device.

The bad news is that as I tried to create an account on Strava, all sorts of warning bells went off. The website is full of dark patterns, and when I clicked to deny Strava access to my health-related data, I was met with this tricky series of dialog boxes:

Click “Decline”…

Click “Deny Permission”…

Click “OK”…

Three clicks to opt out, and if I wasn’t paying attention and just kept clicking the bottom box, I would have opted in by accident. After this, I was greeted by a creepy list of people to follow (how do they know so much about me from just an email?), and then there’s a tricky dialog box that, if answered incorrectly, routes you to a spot to enter credit card information as part of your “free trial”.

Since Garmin at least made money by selling me a $200+ piece of hardware, collecting my health data is just icing on the cake; for Strava, my health data is the cake. It’s pretty clear to me that Strava made a pitch to its investors that they’ll make fat returns by monetizing my private data, including my health information.

This is a hard no for me. Instead of liberating myself from a hostage situation, going from Garmin to Strava would be like stepping out of the frying pan and directly into the fire.

So, even though this was a busy afternoon … I’m scheduled to paddle again the day after tomorrow, and it would be great to have my boat speed analytics before then. Plus, I was sufficiently miffed by the Strava experience that I couldn’t help but start searching around to see if I couldn’t cobble together my own privacy-protecting alternative.

I was very pleased to discover an open-source utility called gpsbabel (thank you gpsbabel! I donated!) that can unpack Garmin’s semi-(?)proprietary “.FIT” file format into the interoperable “.GPX” format. From there, I was able to cobble together bits and pieces of XML parsing code and merge it with OpenStreetMaps via the Folium API to create custom maps of my data.

Even with getting “lost” on a detour of trying to use the Google Maps API that left an awful “for development only” watermark on all my map tiles, this only took an evening — it wasn’t the best possible use of my time all things considered, but it was mostly a matter of finding the right open-source pieces and gluing them together with Python (fwiw, Python is a great glue, but a terrible structural material. Do not build skyscrapers out of Python). The code quality is pretty crap, but Python allows that, and it gets the job done. Given those caveats, one could use it as a starting point for something better.

Now that I have full control over my data, I’m able to visualize it in ways that make sense to me. For example, I’ve plotted my speed as a heat map over the course, with circle sizes proportional to my speed at that moment, and hover-text that shows my instantaneous speed and heart rate:

It’s exactly the data I need, in the format that I want; no more, and no less. Plus, the output is a single html file that I can share directly with nothing more than a simple link. No analytics, no cookies. Just the data I’ve chosen to share with you.

Here’s a snippet of the code that I use to plot the map data:

Like I said, not the best quality code, but it works, and it was quick to write.

Even better yet, I’m no longer uploading my position or fitness data to the cloud — there is a certain intangible satisfaction in “going dark” for yet another surveillance leakage point in my life, without any compromise in quality or convenience.

It’s also an interesting meta-story about how healthy and vibrant the open-source ecosystem is today. When the Garmin cloud fell, I was able to replace the most important functions of it in just an afternoon by cutting and pasting together various open source frameworks.

The point of open source is not to ritualistically compile our stuff from source. It’s the awareness that technology is not magic: that there is a trail of breadcrumbs any of us could follow to liberate our digital lives in case of a potential hostage situation. Should we so desire, open source empowers us to create and run our own essential tools and services.

Edits: added details on how to take data off the watch, and noted the watch’s price.


Trace Together Token: Teardown and Design Overview


On 19 June, GovTech Singapore invited four members of the community to come and inspect their new TraceTogether Token. This token removes the need to carry a phone at all times, and is designed to help those who do not have a smart device capable of running TraceTogether well, including users of older Android devices, non-smartphones, and iOS. I was part of the group, which also included Roland Turner, Harish Pillay, and Andrew "bunnie" Huang, and we were given the opportunity to see the first public revision of the hardware. In this post I will discuss the goal of the token, give an overview of the hardware, compare it with the app version of TraceTogether, and comment on the protocol changes.

Goal of the TraceTogether Token

The Trace Together Token is a dedicated hardware device that makes it easier to inform people if they may have come in prolonged contact with a person who subsequently was diagnosed with COVID-19. This is its sole purpose.

It is a hardware implementation of the app that GovTech previously developed, which has been installed over half a million times. The TraceTogether Token builds on the app and simplifies its usage: Throw it in a handbag or attach it to a keychain and forget about it while it does its thing.

Comparison With Phone Apps

I won't do an in-depth analysis of the Trace Together app. You can read an independent analysis that Frank Liauw put together to learn more. I worry more about the amount of spying that other popular apps on my phone do. For example, every banking, taxi booking app, and food delivery service on my phone has uploaded some amount of data to Facebook, Google, and a company called AppFlyer.

TraceTogether uses a protocol known as BlueTrace, and there are several problems with the protocol that make it challenging to work with.

First, antenna designs vary. As part of the Bluetooth spec, devices can report the amount of power they are currently using to broadcast:

From the Supplement to Bluetooth Core Specification, Part A

BlueTrace includes this information in the advertising beacon, but what does the number mean? Intuitively it will tell you how far away a device is, because you can correlate the strength of the received signal with the broadcast power: If they said they were loudly broadcasting but you received a weak signal, they must be far away. However, antenna designs vary, and just like with humans, one phone's "loud" is another phone's "whisper".
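To see why that calibration matters, here is a sketch of the usual log-distance estimate. The path-loss exponent and the interpretation of the advertised power (as expected RSSI at one metre) are assumptions that vary per antenna and environment, which is exactly the problem:

```python
def estimate_distance_m(tx_power_dbm, rssi_dbm, path_loss_exponent=2.0):
    """Log-distance path-loss estimate of distance from RSSI.

    tx_power_dbm is treated as the expected RSSI at 1 m (an assumption;
    mis-calibrated antennas break it).  An exponent of 2.0 is free
    space; indoors it is typically somewhere between 2.5 and 4.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

For example, `estimate_distance_m(-59, -79)` returns 10.0 metres in free space, but the same 20 dB difference could just as easily mean 3 metres indoors, or a phone with a weak antenna sitting right next to you.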

There's also the issue of charging. Phones must be charged at least daily. With mobile payment becoming more common, people are incentivized to keep their devices charged and running, however it's still very common to be hobbled by a phone simply running out of battery.

On Android, many device manufacturers are very aggressive when it comes to terminating background processes. TraceTogether necessarily must always run in the background, but your phone might not realize that and could terminate the process anyway. Anyone who has found themselves unable to receive WhatsApp messages without opening up the app will have experienced something similar. Additionally, many older Android devices are not compatible with the approach of TraceTogether, where a device can act as a Bluetooth Central (i.e. "host") or a Peripheral (i.e. "device").

As an aside, it was refreshing to hear Minister Vivian Balakrishnan using the words "Peripheral" and "Central" when discussing issues pertaining to older devices. These are the technical terms for the roles Bluetooth devices play, and he correctly pointed out that devices that do not support Bluetooth Low Energy (BLE) are incompatible with Bluetooth-based contact tracing.

Then there's the iOS problem. Apple does not let apps use Bluetooth in the background, so users must always run TraceTogether. The easy answer here is to use the Apple-sanctioned tracing protocol, however this protocol is not compatible with BlueTrace and makes very different assumptions about how contact tracing should work. It also assumes that everyone has a modern device, which leaves out a significant portion of the population that does not have the latest hardware.

Hardware Overview

The hardware isn't ready yet, and we aren't yet allowed to share photos of the device because supplier contracts are still being worked out. Still, we could identify the major components well enough that I put together this block diagram:

Block Diagram of the Trace Together Token

There are several interesting parts to note about this diagram:

  • There is no battery charger – it's designed to run for several months on a single battery.
  • There is a realtime clock with its own battery, meaning time must be important in the new protocol.
  • They use a powered antenna to improve performance.
  • The entire system must be extremely low power.

The last point means it is unlikely that they hid a GPS tracker, WiFi radio, or cellular modem in this device. The battery is a small coin cell, which would only last a few hours if it were receiving GPS or communicating via WiFi. Additionally, there are no additional sensors such as an accelerometer, pressure sensor, or microphone.

All of the major ICs had obfuscated markings so we couldn't identify part numbers. However, the block diagram I sketched above happens to look very similar to the block diagram for Simmel, which is an open-source tracking token put together by Bunnie and myself:

Block diagram of Simmel

There are a few notable differences:

  • Simmel uses a PCB antenna because it was a proof-of-concept device, so we were willing to accept reduced range.
  • There was an experiment in using Near Ultrasound as an alternative to Bluetooth for contact tracing, under the theory that it would be lower power. Ultrasound turns out to be too directional, which is why bats can use it for echolocation.
  • TraceTogether uses a separate realtime clock, whereas Simmel currently relies on a stupendous hack to save time in case of a crash, meaning it is less accurate.
  • Simmel runs CircuitPython in an effort to make early development easier, with the intention of rewriting it in a lower-level language later on. The TraceTogether Token presumably runs its own stack.

Despite these minor differences, there are many similarities:

  • Both run on a non-rechargeable battery
  • Both use a voltage regulator to stabilize the battery voltage
  • Both have an external flash memory for storing contacts
  • Both are designed to last for months without user interaction, and be forgotten about until they are needed
  • Both rely on Bluetooth – specifically Bluetooth LE – for interaction
  • Neither contains any additional sensors, since those would add cost and power consumption

Therefore, while the exact hardware details of the Trace Together Token are still obfuscated, I can safely say that it is conceptually extremely similar to Simmel. Any questions you have about the hardware approach GovTech is taking can be answered by looking at the Simmel hardware repository.

BlueTrace Protocol Changes

One of the challenges we ran into when developing Simmel was the power budget. The nRF52833 part we used requires a lot of power to listen to Bluetooth:

The nRF52833 reference manual on power consumption (Receiving)

BlueTrace advises that a device listen 20% of the time, which means the radio is actively receiving 20% of the time. In testing, we were observing about 5.9 mA current draw when receiving data, compared to 0.012 mA when idle. Furthermore, BlueTrace recommends that a device transmit about 90% of the time, which is unfortunate because transmission has similar numbers:

The nRF52833 reference manual on power consumption (Transmitting)

For a mobile phone these numbers are tiny, but when running off of a pair of AAA batteries, a continuous drain of 2 mA means a 1200 mAh AAA battery will be drained in (1200 mAh / 2 mA) / 24 hours per day = 25 days.
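That back-of-the-envelope estimate as a sketch:

```python
def battery_life_days(capacity_mah, avg_current_ma):
    # Runtime in hours is capacity / current; divide by 24 for days.
    return capacity_mah / avg_current_ma / 24

print(battery_life_days(1200, 2.0))  # 25.0
```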

Part of the problem involves the BlueTrace approach, which follows a traditional Peripheral/Central approach: A Central advertises a GATT Service of "BlueTrace", allowing another device to connect and read the current temporary token / deposit its own temporary token. This is required in order to make a two-way connection in BLE, so it is an extremely common approach. Additionally, ensuring the connection is two-way also ensures that the devices are close enough to matter from an epidemiological perspective.

GovTech must have also run into these issues, because the BlueTrace protocol is being modified for use with the Trace Together Token. Instead of forming a two-way connection, devices now simply broadcast their temporary tokens. The interval for broadcast is much longer, and the scanning interval is much shorter, meaning the device can spend most of its time in a low-power suspend state.


Simmel has 2 MB of flash memory, and each BlueTrace Temporary Token payload is 160 bytes. That means Simmel can store (2^21 bytes of flash) / (160 bytes per payload) = 13,107 records, or about 624 records per day over a 21-day retention window. We weren't sure how many daily records to expect, and the number was surely very low during the Circuit Breaker. GovTech must have decided that this payload was too large to store on an embedded device, so for the Trace Together Token they reduced the size of the critical part of the payload. In the new protocol this payload is much smaller, although we don't yet have exact numbers.
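The flash budget works out as follows. This is a quick sketch using only the figures stated in the text (2 MB of flash, 160-byte payloads, a 21-day window):

```python
# Flash storage budget for BlueTrace records on Simmel.
FLASH_BYTES = 2 ** 21     # 2 MB of on-board flash
PAYLOAD_BYTES = 160       # one BlueTrace Temporary Token payload
RETENTION_DAYS = 21       # epidemiologically relevant retention window

total_records = FLASH_BYTES // PAYLOAD_BYTES        # → 13107 records
records_per_day = total_records // RETENTION_DAYS   # → 624 records/day

print(total_records, records_per_day)  # → 13107 624
```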

Finally, there's the issue of temporary tokens. In the current Trace Together app, a batch of tokens is downloaded from the Ministry of Health every few days. With Simmel, we assumed we could simply download a few months' worth of tokens and store them on the flash memory, consuming them as time passed. GovTech took a different approach with the Trace Together Token, and instead derive the temporary token from a unique ID hashed together with the current time. This approach is similar to how the European DP3T protocol works.
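GovTech's exact derivation has not been published. As a hedged illustration of the general DP3T-style idea, a rotating temporary token can be computed as a keyed hash of a per-device secret and the current time epoch. The 15-minute rotation interval, 16-byte token length, and HMAC-SHA256 construction below are my assumptions for the sketch, not confirmed parameters of the Trace Together Token:

```python
import hashlib
import hmac

EPOCH_SECONDS = 15 * 60  # assumed token rotation interval (hypothetical)

def temp_token(device_secret: bytes, now: float) -> bytes:
    """Derive a rotating temporary token from a per-device secret and the
    current time epoch. This is NOT the actual GovTech derivation (which
    is unpublished); it is a DP3T-flavoured illustration of the idea."""
    epoch = int(now // EPOCH_SECONDS)
    return hmac.new(device_secret, epoch.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:16]

secret = b"\x00" * 32  # placeholder device secret
t1 = temp_token(secret, 1_000_000.0)
t2 = temp_token(secret, 1_000_000.0 + EPOCH_SECONDS)
assert t1 != t2  # tokens rotate every epoch; no token table needed in flash
```

The design win is that the device never needs to store months of pre-downloaded tokens; it only needs the secret and a clock.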

As a result of all of these changes, many of the challenges we faced when designing Simmel are avoided with the Trace Together Token: they can reduce power by spending less time transmitting and receiving; they don't have to use so much storage to keep track of interactions; and they don't need to store several months' worth of temporary tokens. Overall these changes are exactly what is needed for an implementation of the protocol for use in a hardware token.


The approach taken by GovTech when designing the Trace Together Token hardware is sound. The device accomplishes the goals it set out to, while preserving the privacy of the owner. Like the Trace Together app, the Trace Together Token cannot be used to identify the owner merely by looking at Bluetooth broadcasts – the only entity that can correlate logged data to a human is the Ministry of Health.

During this session we didn't have access to the software, because as a tracing beacon there's not much of a user interface beyond a blinking LED. Also, they weren't ready to let us attach debug probes, meaning we can't draw any conclusions about the software itself. However, given the PCB design and the system's power requirements there isn't much they could hide.

Overall I'm pleased with the direction they are going in with the Trace Together Token, and look forward to getting one of my very own.


On Contact Tracing and Hardware Tokens


Early in the COVID-19 pandemic, I was tapped by the European Commission to develop a privacy-protecting contact tracing token, which you can read more about at the Simmel project home page. And very recently, Singapore has announced the deployment of a TraceTogether token. As part of their launch, I was invited to participate in a review of their solution. The urgency of COVID-19 and the essential challenges of building supply chains means we are now in the position of bolting wheels on a plane as it rolls down the runway. As with many issues involving privacy and technology, this is a complicated and nuanced situation that cannot be easily digested into a series of tweets. Thus, over the coming weeks I hope to offer you my insights in the form of short essays, which I will post here.

Since I was only able to spend an hour with the TraceTogether token so far, I’ll spend most of this essay setting up the background I’ll be using to evaluate the token.

Contact Tracing

The basic idea behind contact tracing is simple: if you get sick, identify your close contacts, and test them to see if they are also sick. If you do this fast enough, you can contain COVID-19, and most of society continues to function as normal.

However, from an implementation standpoint, there are some subtleties that I struggled to wrap my head around. Dr. Vivian Balakrishnan, the Minister-in-charge of the Smart Nation Initiative, briefly stated at our meeting on Friday that the Apple/Google Exposure Notification system did not reveal the “graph”. In order to help myself understand the epidemiological significance of extracting the contact graph, I drew some diagrams to illustrate contact tracing scenarios.

Let’s start by looking at a very simple contact tracing scenario.

In the diagram above, two individuals are shown, Person 1 and Person 2. We start Day 1 with Person 1 already infectious yet only mildly symptomatic. Person 1 comes in contact with Person 2 around mid-day. Person 2 then incubates the virus for a day, and becomes infectious late on Day 2. Person 2 may not have any symptoms at this time. At some future date, Person 2 infects two more people. In this simple example, it is easy to see that if we can isolate Person 2 early enough, we could prevent at least two future exposures to the virus.

Now let’s take a look at a more complicated COVID-19 spread scenario with no contact tracing. Let’s continue to assume Person 1 is a carrier with mild to no symptoms but is infectious: a so-called “super spreader”.

The above graphic depicts the timelines of 8 people over a span of five days with no contact tracing. Person 1 is ultimately responsible for the infection of several people over a period of a few days. Observe that the incubation periods are not identical for every individual; it will take a different amount of time for every person to incubate the virus and become infectious. Furthermore, the onset of symptoms is not strongly correlated with infectiousness.

Now let’s add contact tracing to this graph.

The graphic above illustrates the same scenario as before, but with the “platonic ideal” of contact tracing and isolation. In this case, Person 4 shows symptoms, seeks testing, and is confirmed positive early on Day 4; their contacts are isolated, and dozens of colleagues and friends are spared from future infection. Significantly, digging through the graph of contacts also allows one to discover a shared contact of Person 4 and Person 2, thus revealing that Person 1 is the originating asymptomatic carrier.

There is a subtle distinction between “contact tracing” and “contact notification”. Apple/Google’s “Exposure Notification” system only performs notifications to the immediate contacts of an infected person. The significance of this subtlety is hinted at by the fact that the protocol was originally named a “Privacy Preserving Contact Tracing Protocol”, but was renamed to the more accurate description of “Exposure Notification” in late April.

To better understand the limitations of exposure notification, let’s consider the same scenario as above, but instead of tracing out the entire graph, we only notify the immediate contacts of the first person to show definite symptoms – that is, Person 4.

With exposure notification, carriers with mild to no symptoms such as Person 1 would get misleading notifications that they were in contact with a person who tested positive for COVID-19, when in fact Person 1 gave COVID-19 to Person 4. In this case, Person 1 – who feels fine but is actually infectious – will continue about their daily life, except for the curiosity that everyone around them seems to be testing positive for COVID-19. As a result, some continued infections are unavoidable. Furthermore, Person 2 is hidden from Person 4, as Person 2 is not within Person 4’s set of immediate notification contacts.

In a nutshell, Exposure Notification alone cannot determine causality of an infection. A full contact “graph”, on the other hand, can discover carriers with mild to no symptoms. Furthermore, it has been well-established that a significant fraction of COVID-19 infections show mild or no symptoms for extended periods of time – these are not “rare” events. These individuals are infectious but are well enough to walk briskly through crowded metro stations and eat at hawker stalls. Thus, in the “local context” of Singapore, asymptomatic carriers can seed dozens of clusters in a matter of days if not hours, unlike less dense countries like the US, where infectious individuals may come in contact with only a handful of people on any given day.
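The difference between one-hop exposure notification and full-graph tracing can be sketched with a toy contact graph. The person numbers and edges below are illustrative, not taken from the article's diagrams:

```python
from collections import deque

# Hypothetical undirected contact edges: Person 1 met Persons 2 and 4, etc.
edges = [(1, 2), (1, 4), (2, 3), (4, 5), (4, 6)]

adj: dict[int, set[int]] = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def exposure_notification(case: int) -> set[int]:
    """One hop only: the immediate contacts of the confirmed case."""
    return set(adj.get(case, set()))

def full_graph_trace(case: int) -> set[int]:
    """Breadth-first traversal over the entire contact graph."""
    seen, queue = {case}, deque([case])
    while queue:
        for contact in adj[queue.popleft()]:
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return seen - {case}

# Person 4 tests positive: one-hop notification reaches {1, 5, 6}, but
# Persons 2 and 3 stay hidden unless the full graph is walked.
assert exposure_notification(4) == {1, 5, 6}
assert full_graph_trace(4) == {1, 2, 3, 5, 6}
```

In this toy example, only the full traversal surfaces the hidden nodes two or more hops away from the index case, which is the whole point of "full graph" tracing.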

The inability to quickly identify and isolate mildly symptomatic super-spreaders motivates the development of the local TraceTogether solution, which unlocks the potential for “full graph” contact tracing.

On Privacy and Contact Tracing

Of course, the privacy implications of full-graph contact tracing are profound. Also profound are the potential health risks and loss of life absent full-graph contact tracing. There’s also a proven solution for containing COVID-19 that involves no sacrifice of privacy: an extended Circuit-Breaker style lockdown. Of course, this comes at the price of the economy.

Of the three elements of privacy, health, or economy, it seems we can only pick two. There is a separate and important debate about which two we should prioritize, but that is beyond the context of this essay. For the purpose of this discussion, let’s assume contact tracing will be implemented. In this case, it is incumbent upon technologists like us to try and come up with a compromise that can mitigate the privacy impact while facilitating public policy.

Back in early April, Sean ‘xobs’ Cross and I were contacted by the European Commission’s NGI program via NLnet to propose a privacy-protecting contact tracing hardware token. The resulting proposal is called “Simmel”. While not perfect, the salient privacy features of Simmel include:

  1. Strong isolation of user data. By disallowing sensor fusion with the smartphone, there is zero risk of GPS or other geolocation data being leaked. It is also much harder to do metadata-based attacks against user privacy.
  2. Citizens are firmly in control. Users are the physical keeper of their contact data; no third-party servers are involved, until they volunteer their data to an authority by surrendering the physical token. This means in an extreme case, a user has the option of physically destroying their token to erase their history.
  3. Citizens can temporarily opt-out. By simply twisting the cap of the token, users can power the token down at any time, thus creating a gap in their trace data (note: this feature is not present on the first prototypes).
  4. Randomized broadcast data. This is a protocol-level feature which we recommend to defeat the ability for third parties (perhaps an advertising agency or a hostile government) from piggy backing on the protocol to aggregate user locations for commercial or strategic benefit.

Why a Hardware Token?

But why a hardware token? Isn’t an app just better in so many ways?

At our session on Friday, the TraceTogether token team stated that Singapore needs hardware tokens to better serve two groups: the underprivileged, and iPhone users. The underprivileged can’t afford to buy a smartphone; and iPhone users can only run Apple-approved protocols, such as their Exposure Notification service (which does not enable full contact tracing). In other words, iPhone users, like the underprivileged, also don’t own a smartphone; rather, they’ve bought a phone that can only be used for Apple-sanctioned activities.

Our Simmel proposal makes it clear that I’m a fan of a hardware token, but for reasons of privacy. It turns out that apps, and smartphones in general, are bad for user privacy. If you genuinely care about privacy, you would leave your smartphone at home. The table below helps to illustrate the point. A red X indicates a known plausible infraction of privacy for a given device scenario.

The tracing token (as proposed by Singapore) can reveal your location and identity to the government. Nominally, this happens at the point where you surrender your token to the health authorities. However, in theory, the government could deploy tens of thousands of TraceTogether receivers around the island to record the movement of your token in real-time. While this is problematic, it’s relevant to compare this against your smartphone, which typically broadcasts a range of unique, unencrypted IDs, ranging from the IMEI to the wifi MAC address. Because the smartphone’s identifiers are not anonymized by default, they are potentially usable by anyone – not just the government – to identify you and your approximate location. Thus, for better or for worse, the design of the TraceTogether token does not meaningfully change the status quo as far as “big infrastructure” attacks on individual privacy.

Significantly, the tracing token employs an anonymization scheme for the broadcast IDs, so it should not be able to reveal anything about your location or identity to third parties – only to the government. Contrast this to the SafeEntry ID card scanner, where you hand over your ID card to staff at SafeEntry kiosks. This is an arguably less secure solution, as the staff member has an opportunity to read your private details (which includes your home address) while scanning your ID card, hence the boxes are red under “location” and “identity”.

Going back to the smartphone, “typical apps” – say, Facebook, Pokemon Go, Grab, TikTok, Maps – are often installed with most permissions enabled. Such a phone actively and routinely discloses your location, media, phone calls, microphones, contacts, and NFC (used for contactless payment and content beaming) data to a wide variety of providers. Although each provider claims to “anonymize” your data, it has been well-established that so much data is being published that it is virtually a push of a button to de-anonymize that data. Furthermore, your data is subject to surveillance by several other governments, thanks to the broad power of governments around the world to lawfully extract data from local service providers. This is not to mention the ever-present risk of malicious actors, exploits, or deceptive UI techniques to convince, dupe, or coerce you to disclose your data.

Let’s say you’re quite paranoid, and you cleverly put your iPhone into airplane mode most of the time. Nothing to worry about, right? Wrong. For example, in airplane mode, the iPhone still runs its GPS receiver and NFC. An independent analysis I’ve made of the iPhone also reveals occasional, unexplained blips on the wifi interface.

To summarize, here are the core arguments for why a hardware token offers stronger privacy protections than an app:

No Sensor Fusion

The data revealed by a hardware token is strongly limited by its inability to perform “sensor fusion” with a smartphone-like sensor suite. And even though I was only able to spend an hour with the device, I can say with a high degree of confidence that the TraceTogether token has little to no capability beyond the requisite BLE radio. Why do I say this? Because physics and economics:

Physics: more radios and sensors would draw more power. Ever notice how your phone’s battery life is shorter if location services are on? If the token is to last several months on such a tiny battery, there simply is not enough power available to operate much more than the advertised BLE functions.
Economics: more electronics means more cost. The publicly disclosed tender offering places a cap on the value of parts at S$20, and it essentially has to be less than that because the producer must also bear their development cost out of the tender. There is little room for extraneous sensors or radios within that economic envelope.

Above: the battery used in the TraceTogether token. It has a capacity of 1000 mAh. The battery in your smartphone has around 3x this capacity, and requires daily charging.

The economics argument is weaker than the physics argument, because the government could always prepare a limited number of “special” tokens to track select individuals at an arbitrary cost. However, the physics argument still stands – no amount of money invested by the government can break the laws of physics. If Singapore could develop a mass-manufacturable battery that can power a smartphone sensor suite for months in that form factor – well, let’s just say the world would be a very different place.
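To put the physics argument in numbers: assuming a six-month service life (my assumption; the article says only "several months"), a 1000 mAh cell leaves a very small average current budget:

```python
# Average current budget for the token's battery. The 180-day service
# life is an assumed figure for illustration.
BATTERY_MAH = 1000   # capacity of the token's battery, in mAh
TARGET_DAYS = 180    # assumed service life

budget_ma = BATTERY_MAH / (TARGET_DAYS * 24)  # mAh / hours = mA
print(f"{budget_ma:.3f} mA average")  # → 0.231 mA average
```

Compare that fraction-of-a-milliamp budget with the multi-milliamp figures for continuous BLE receive or transmit quoted earlier: there is simply no headroom left for extra radios or sensors.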

Citizen Hegemony over Contact History

Assuming that the final TraceTogether token doesn’t provide a method to repurpose the Bluetooth Low-Energy (BLE) radio for data readout (and this is something we hope to confirm in a future hackathon), citizens have absolute hegemony over their contact history data, at least until they surrender it in a contact tracing event.

As a result the government is, perhaps inadvertently, empowering citizens to rebel against the TraceTogether system: one can always crush their token and “opt-out” of the system (but please remove the battery first, otherwise you may burn down your flat). Or perhaps more subtly, you can “forget your token at home”, or carry it in a metallized pouch to block its signal. The physical embodiment of the token also means that once the COVID-19 pandemic is under control, destroying the token definitively destroys the data within it – unlike an app, where too often uninstalling the app simply means an icon is removed from your screen, but some data is still retained as a file somewhere on the device.

In other words, a physical token means that an earnest conversation about privacy can continue in parallel with the collection of contact tracing data. So even if you are not sure about the benefit of TraceTogether today, carrying the token allows you to defer the final decision of whether to trust the government until the point where you are requested to surrender your token for contact trace extraction.

If the government gets caught scattering BLE receivers around the island, or an errant token is found containing suspicious circuitry, the government stands to lose not just the trust of the people, but also access to full-graph contact tracing as citizens and residents dispose of tokens en masse. This restores a certain balance of power, where the government can and will be held accountable to its social contract, even as we amass contact tracing data together as a whole.

Next Steps

When I was tapped to independently review the TraceTogether token, I told the government that I would hold no punches – and surprisingly, they still invited me to the introductory session last Friday.

This essay framed the context I will use to evaluate the token. “Exposure notification” is not sufficient to isolate mildly symptomatic carriers of COVID-19, whereas “full graph” contact tracing may be able to make some headway against this problem. The good news is that the introduction of a physically embodied hardware token presents a safer opportunity to continue the debate on privacy while simultaneously improving the collection of contact tracing data. Ultimately, deployment of a hardware token system relies upon the compliance of citizens, and thus it is up to our government to maintain or earn our trust to manage our nation’s best interests throughout this pandemic.

I look forward to future hackathons where we can really dig into what’s running inside the TraceTogether token. Until then, stay safe, stay home when you can, and when you must go outside, wear your mask!

PS: You should also check out Sean ‘xobs’ Cross’ teardown of the TraceTogether token!


Black Tech for Black Lives: Portland founders among those imploring tech to take a stand


Among the 150 Black tech leaders imploring technology companies and startups to take a stand against systemic racism — through a newly launched effort called Black Tech for Black Lives — a couple of names immediately stood out: Portland founders Stephen Green, founder of PitchBlack, and Lindsey Murphy, founder of The Fab Lab.

We are a collective of Black tech entrepreneurs, investors, creatives, changemakers, and workers, united to use our social, political, and economic capital for the advancement of our communities. We commit to acting in solidarity with those leaders working to create a more just world.

Tech is complicit. We as Black people in tech have a unique position and opportunity to respond to violence against Black people’s bodies. While we’re proximate to the pain, we largely avoid its most brutal physical outcomes. But we, too, feel the blows. We carry the scars on our psyches and hearts as our voices go largely unheard in the workplace and beyond.

As Portland writer Taylor Hatmaker highlights in TechCrunch:

The effort, called “Black Tech for Black Lives,” pulls together a set of specific, actionable commitments intended to “support frontline leaders working to create a more just world.” The pledge is designed to elevate Bay Area community leaders working in tech’s epicenter on specific policy goals regarding issues like policing reform, local elections and by hiring and supporting more Black talent in tech.

To read more or to join as a cosigner or an ally, visit Black Tech for Black Lives.


I started this project in 2015 with cute animals doing funny...


I started this project in 2015 with cute animals doing funny sounding new economy jobs. As it grew it added visionaries, disruptors and hackers. Then opportunists and con men. Violent billionaire dictator is not a random result here; it’s a parallel outcome of move-fast-and-break-things culture — an idea that only works for people who are so privileged and insulated that they have little to lose by plowing ahead without thinking about the consequences. 

Stop idolizing a few jackasses that won the VC lottery and listen to marginalized people instead. Have the hard conversations with our kids that so many of our parents avoided. 
