
Fannie Mae: Mortgage Serious Delinquency Rate Increased in July

1 Share
Fannie Mae reported that the Single-Family Serious Delinquency rate increased to 3.24% in July, from 2.65% in June. The serious delinquency rate is up from 0.67% in July 2019.

This is the highest serious delinquency rate since December 2012.

These are mortgage loans that are "three monthly payments or more past due or in foreclosure".

The Fannie Mae serious delinquency rate peaked in February 2010 at 5.59%.

[Graph: Fannie and Freddie Seriously Delinquent Rate (click on graph for larger image)]

By vintage, for loans made in 2004 or earlier (2% of portfolio), 5.57% are seriously delinquent (up from 5.00% in June). For loans made in 2005 through 2008 (3% of portfolio), 9.36% are seriously delinquent (up from 8.37%). For recent loans, originated in 2009 through 2018 (95% of portfolio), 2.79% are seriously delinquent (up from 2.21%). So Fannie is still working through a few poorly performing loans from the bubble years.

Mortgages in forbearance are counted as delinquent in this monthly report, but they will not be reported to the credit bureaus.

This is very different from the increase in delinquencies following the housing bubble.   Lending standards have been fairly solid over the last decade, and most of these homeowners have equity in their homes - and they will be able to restructure their loans once they are employed.

Note: Freddie Mac reported earlier.
Read the whole story
22 days ago
Share this story

Leanpath is now a Certified B Corporation

1 Share

We’re proud to announce that Leanpath is now a Certified B Corporation® - joining only 3,400 other companies around the globe that are committed to balancing purpose and profit.

Read the whole story
33 days ago
Share this story

I Love MDN, or the cult of the free in action

1 Share

Yesterday or so a new initiative I Love MDN was unveiled. People can show their appreciation for the MDN staff and volunteers by leaving a comment.

I have a difficult message about this initiative. For almost a day I’ve been trying to find a way to bring that message across in an understanding, uplifting sort of way, but I failed.

Before I continue I’d like to remind you that I ran the precursor to MDN, all by myself, for 15 years, mostly for free. I was a community volunteer. I know exactly what goes into that, and what you get back from it. I also burned out on it, and that probably colours my judgement.

So here is my message, warts and all.

I find I Love MDN demeaning to technical writers. It reminds me of breaking into spontaneous applause for our courageous health workers instead of funding them properly so they can do their jobs.

It pretends technical writing is something that can be done by 'the community', i.e. random people, instead of being a job that requires very specialised skills. If you deny these skills exist by pretending anyone can do it, you’re demeaning the people who have actually taken the time and trouble to build up those skills.

In addition, I see the I Love MDN initiative as an example of the cult of the free, of everything that’s wrong with the web development community today. The co-signers unthinkingly assume they are entitled to free content.

Unthinking is the keyword here. I do not doubt that the intentions of the organisers and co-signers are good, and that they did not mean to bring across any of the nasty things I said above and will say below. They just want to show MDN contributors that their work is being valued.

That’s nice. But it’s not enough. Far from it.

Take a look here. It is my old browser compatibility site after four to six years of lying fallow. Would you use this as a resource for your daily work? There are still some useful bits, but it’s clear that the majority of these pages are sadly outdated.

That will be MDN’s fate under a volunteer-only regime.

What we need is money to retain a few core technical writers permanently. I Love MDN ignores that angle completely.

Did you sign I Love MDN? Great! Are you willing to pay 50-100 euros/dollars per year to keep MDN afloat? If not, this is all about making you feel better, not the technical writers. You’re part of the problem, not the solution.

Here’s our life blood — for free

MDN Web Docs is the life blood, the home, the source of truth for millions of web developers everyday. [...] As a community of developers we have access to all of this information for free ♥️

That’s not wonderful. It’s terrifying.

We get everything for free hurray hurray, also, too, community community community, and, hey! with that statement out of the way we’re done. Now let’s congratulate ourselves with our profound profundity and dance the glad dance of joy. Unicorn-shitting rainbows will be ours forever!

I Love MDN hinges on the expectation on the part of web developers that this sort of information ought to come for free — the expectation we’re entitled to this sort of free ride.

(That’s also the reason I never contributed to MDN. I feel I’ve done my duty, and although I don’t mind writing a few more articles I very much mind doing it for free.)

This is all made possible by a passionate community, inspirational technical writers, and a small, but determined team of developers.

Hogwash. The passionate community has nothing to do with anything, unless they’re willing to pay. A profoundly unscientific poll indicates that only about two-thirds of my responding followers are willing to do so. The rest, apparently, is too passionate to pay. It’s just along for the free ride. That isn’t very comforting.

Working in the long run

Rachel Andrew puts it better than I can:

The number of people who have told me that MDN is a wiki, therefore the community will keep it up to date tells me two things. People do not get the value of professional tech writers. Folk are incredibly optimistic about what "the community" will do for free.

So you once wrote an MDN page. Great! Thanks!

But will you do the boring but necessary browser testing to figure out if what you’re describing is always true, or just most of the time? And will you repeat that testing once new versions have come out? Will you go through related pages and update any references that need to be updated? Will you follow advances in what you described and update the page? If someone points out an error six months from now, will you return to the page to revise it and do the necessary research?

If the answer to any of these questions is No, you did a quarter of your job and then walked away. Not very useful.

And if the answer to all of these questions is Yes, hey, great, you’ve got what it takes! You’re really into technical writing! We need you! Now, quick, tell me, how long will you keep it up without any form of payment? Quite a while, you say? Great! Try beating my record of 15 years.

The problem with expecting volunteers to do this sort of work is that they burn out. Been there, done that. And what happens when all volunteers burn out?

Yes, new volunteers will likely step up. But they have to be introduced to the documentation system, not only the technical bits, but also the editorial requirements. Their first contributions will have to be checked for factual errors and stylistic problems, for proper linking to related pages, for enough browser compatibility information. Who’s going to do that? Also volunteers? But they just burned out.

It doesn’t work in the long run.


What ought to happen is MDN (or its successor) securing the funding to retain a few core technical writers on a permanent basis. Without that, it’s doomed to fail.

Now there are two ways of securing funding. The first one is appealing to big companies, particularly browser vendors. I can see Google, Microsoft, and Samsung chipping in a bit, maybe even quite a lot, to keep MDN running. (Apple won’t, of course. They’re on their own cloud.) This could work, especially in the short run.

But will we be well served by that in the long run? You might have noticed that all companies I named use a Chromium-based browser. What about Firefox? Or WebKit?

I have no doubt that the Chrome, Edge, and Samsung Internet developer relations teams are totally serious about keeping other browsers on board and will not bend MDN new-style to their own browsers in any way. They’ve shown their commitment to browser diversity time and again.

What I doubt is that the final decision rests with them. Once MDN new-style funded by the browser vendors has been running for a while, managers will poke their heads around the corner to ask what we, as in Google, Microsoft, or Samsung, get in return for all the money we’re spending. More attention for our browser, that’s what. Make it so!

That’s why I prefer the second option in the long run: funding by the web community itself. Create an independent entity like Fronteers, but then international, get members to pay 50-100 euros/dollars per year, and use that money to fund MDN or its successor.

Now this is a lot of work. But I still feel it needs to be done.

But who will do it? Volunteers? We’ll run into the same problem that I sketched above, just one step removed. I briefly considered starting such an initiative myself, but I found that I am unwilling to do it for free.

And I know exactly what it takes. I founded Fronteers for free, and it took me half a year of mind-numbing work, including fending off random idiots, I mean community members, who also had an opinion. Even though others stepped up and helped, my first burn-out was mostly caused by Fronteers’s founding, and I am unwilling to do it all over again for free.

So there we are. On balance, it’s more likely we go with the big-company solution that will work in the short run but will give problems in the long run.

Unless the web development community stops expecting a free ride, and starts to pay up. Initiatives such as I Love MDN don’t give me a lot of hope, though.

Read the whole story
35 days ago
Share this story

IPv4, IPv6, and a sudden change in attitude

1 Share

A few years ago I wrote The World in Which IPv6 was a Good Design. I'm still proud of that article, but I thought I should update it a bit.

No, I'm not switching sides. IPv6 is just as far away from universal adoption, or being a "good design" for our world, as it was three years ago. But since then I co-founded a company that turned out to be accidentally based on the principles I outlined in that article. Or rather, from turning those principles upside-down.

In that article, I explored the overall history of networking and the considerations that led to IPv6. I'm not going to cover that ground again. Instead, I want to talk about attitude.

Internets, Interoperability, and Postel's Law

Did you ever wonder why "Internet" is capitalized?

When I first joined the Internet in the 1990s, I found some now-long-lost introductory tutorial. It talked about the difference between an internet (lowercase i) and the Internet (capital I). An internet is "any network that connects smaller networks together." The Internet is... well... it turns out that you don't need more than one internet. If you have two internets, it is nearly unavoidable that someone will soon figure out how to connect them together. All you need is one person to build that one link, and your two internets become one. By induction then, the Internet is the end result when you make it easy enough for a single motivated individual to join one internet to another, however badly.

Internets are fundamentally sloppy. No matter how many committees you might form, ultimately connections are made by individuals plugging things together. Those things might follow the specs, or not. They might follow those specs well, or badly. They might violate the specs because everybody else is also violating the specs and that's the only way to make anything work. The connections themselves might be fast or slow, or flakey, or only functional for a few minutes each day, or subject to amateur radio regulations, or worse. The endpoints might be high-powered servers, vending machines, toasters, or satellites, running any imaginable operating system. Only one thing's for sure: they all have bugs.

Which brings us to Postel's Law, which I always bring up when I write about networks. When I do, invariably there's a slew of responses trying to debate whether Postel's Law is "right," or "a good idea," as if it were just an idea and not a force of nature.

Postel's Law says simply this: be conservative in what you send, and liberal in what you accept. Try your best to correctly handle the bugs produced by the other end. The most successful network node is one that plans for every "impossible" corruption there might be in the input and does something sensible when it happens. (Sometimes, yes, "something sensible" is to throw an error.)

[Side note: Postel's Law doesn't apply in every situation. You probably don't want your compiler to auto-fix your syntax errors, unless your compiler is javascript or HTML, which, kidding aside, actually were designed to do this sort of auto-correction for Postel's Law reasons. But the law does apply in virtually every complex situation where you need to communicate effectively, including human conversations. The way I like to say it is, "It takes two to miscommunicate." A great listener, or a skilled speaker, can resolve a lot of conflicts.]

Postel's Law is the principle the Internet is based on. Not because Jon Postel was such a great salesperson and talked everyone into it, but because that is the only winning evolutionary strategy when internets are competing. Nature doesn't care what you think about Postel's Law, because the only Internet that happens will be the one that follows Postel's Law. Every other internet will, without exception, eventually be joined to The Internet by some goofball who does it wrong, but just well enough that it adds value, so that eventually nobody will be willing to break the connection. And then to maintain that connection will require further application of Postel's Law.

IPv6: a different attitude

If you've followed my writing, you might have seen me refer to IPv6 as "a second internet that not everyone is connected to." There's a lot wrapped up in that claim. Let's back up a bit.

In The World in Which IPv6 was a Good Design, I talked about the lofty design goals leading to IPv6: eliminate bus networks, get rid of MAC addresses, no more switches and hubs, no NATs, and so on. What I didn't realize at the time, which I now think is essential, is that these goals were a fundamental attitude shift compared to what went into IPv4 (and the earlier protocols that led to v4).

IPv4 evolved as a pragmatic way to build an internet out of a bunch of networks and machines that existed already. Postel's Law says you'd best deal with reality as it is, not as you wish it were, and so they did. When something didn't connect, someone hacked on it until it worked. Sloppy. Fits and starts, twine and duct tape. But most importantly, nobody really thought this whole mess would work as well as it turned out to work, or last as long as it turned out to last. Nobody knew, at the time, that whenever you start building internets, they always lead inexorably to The Internet.

These (mostly) same people, when they started to realize the monster they had created, got worried. They realized that 32-bit addresses, which they had originally thought would easily last for the lifetime of their little internet, were not even enough for one address per person in the world. They found out, not really to anyone's surprise, that Postel's Law, unyielding as it may be, is absolutely a maintenance nightmare. They thought they'd better hurry up and fix it all, before this very popular Internet they had created, which had become a valuable, global, essential service, suddenly came crashing down and it would all be their fault.

[Spoiler: it never did come crashing down. Well, not permanently. There were and are still short-lived flare-ups every now and then, but a few dedicated souls hack it back together, and so it goes.]

IPv6 was created in a new environment of fear, scalability concerns, and Second System Effect. As we covered last time, its goal was to replace The Internet with a New Internet — one that wouldn't make all the same mistakes. It would have fewer hacks. And we'd upgrade to it incrementally over a few years, just as we did when upgrading to newer versions of IP and TCP back in the old days.

We can hardly blame people for believing this would work. Even the term "Second System Effect" was only about 20 years old at the time, and not universally known. Every previous Internet upgrade had gone fine. Nobody had built such a big internet before, with so much Postel's Law, with such a variety of users, vendors, and systems, so nobody knew it would be different.

Well, here we are 25 years later, and not much has changed. If we were feeling snarky, we could perhaps describe IPv6 as "the String Theory of networking": a decades-long boondoggle that attracts True Believers, gets you flamed intensely if you question the doctrine, and which is notable mainly for how much progress it has held back.

Luckily we are not feeling snarky.

Two Internets?

There are, of course, still no exceptions to the rule that if you build any internet, it will inevitably (and usually quickly) become connected to The Internet.

I wasn't sitting there when it happened, but it's likely the very first IPv6 node ran on a machine that was also connected to IPv4, if only so someone could telnet to it for debugging. Today, even "pure IPv6" nodes are almost certainly connected to a network that, if configured correctly, can find a way to any IPv4 node, and vice versa. It might not be pretty, it might involve a lot of proxies, NATs, bridges, and firewalls. But it's all connected.

In that sense, there is still just one Internet. It's the big one. Since day 1, The Internet has never spoken just one protocol; it has always been a hairy mess of routers, bridges, and gateways, running many protocols at many layers. IPv6 is one of them.

What makes IPv6 special is that its proponents are not content for it to be an internet that connects to The Internet. No! It's the chosen one. Its destiny is to be The Internet. As a result, we don't only have bridges and gateways to join the IPv6 internets and the IPv4 internet (although we do).

Instead, IPv6 wants to eventually run directly on every node. End users have been, uh, rather unwilling to give up IPv4, so for now, every node has that too. As a result, machines are often joined directly to what I call "two competing internets": the IPv4 one and the IPv6 one.

Okay, at this point our terminology has become very confusing. Sorry. But all this leads to the question I know you want me to answer: Which internet is better!?


I'll get to that, but first we need to revisit what I bravely called Avery's Laws of Wifi Reliability, which are not laws, were surely invented by someone else (since they're mostly a paraphrasing of a trivial subset of CAP theorem), and as it turns out, apply to more than just wifi. Oops. I guess the name is wrong in almost every possible way. Still, they're pretty good guidelines.

Let's refresh:

  • Rule #1: if you have two wifi router brands that work with 90% of client devices, and your device has a problem with one of them, replacing the wifi router brand will fix the problem 90% of the time. Thus, an ISP offering both wifi routers has a [1 - (10% x 10%)] = 99% chance of eventual success.

  • Rule #2: if you're running two wifi routers at once (say, a primary router and an extender), and both of them work "correctly" for about 90% of the time each day, the chance that your network has no problems all day is 81%.

In Rule #1, which I call "a OR b", success compounds and failure rates drop.

In Rule #2, which I call "a AND b", failure compounds and success drops.
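The arithmetic behind the two rules is small enough to check directly (a sketch using the 90% example rate from the rules above):

```python
# Reliability of two 90%-reliable components, combined two ways.
p = 0.90  # chance each router (or internet) works, per the examples above

# Rule #1, "a OR b": only one of the two needs to work,
# so the failure rates multiply and success compounds.
or_reliability = 1 - (1 - p) * (1 - p)

# Rule #2, "a AND b": both need to work at once,
# so the success rates multiply and failure compounds.
and_reliability = p * p

print(f"a OR b:  {or_reliability:.0%}")   # 99%
print(f"a AND b: {and_reliability:.0%}")  # 81%
```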

But wait, didn't we add redundancy in both cases?

Depending how many distributed systems you've had to build, this is either really obvious or really mind blowing. Why did the success rate jump to 99% in the first scenario but drop to 81% in the second? What's the difference? And... which one of those cases is like IPv6?


Or we can ask that question another way. Why are there so many web pages that advise you to solve your connectivity problem by disabling IPv6?

Because automatic failover is a very hard problem.

Let's keep things simple. IPv4 is one way to connect client A to server X, and IPv6 is a second way. It's similar to buying redundant home IPv4 connections from, say, a cable and a DSL provider and plugging them into the same computer. Either way, you have two independent connections to The Internet.

When you have two connections, you must choose between them. Here are some factors you can consider:

  • Which one even offers a path from A to X? (If X doesn't have an IPv6 address, for example, then IPv6 won't be an option.)

  • Which one gives the shortest paths from A to X and from X to A? (You could evaluate this using hopcount or latency, for example, like in my old netselect program.)

  • Which path has the most bandwidth?

  • Which path is most expensive?

  • Which path is most congested right now?

  • Which path drops out least often? (A rebooted NAT will drop a TCP connection on IPv4. But IPv6 routes change more frequently.)

  • Which one has buggy firewalls or NATs in the way? Do they completely block it (easy) or just act strangely (hard)?

  • Which one blocks certain UDP or TCP ports, intentionally or unintentionally?

  • Which one is misconfigured to block certain ICMP packets so that PMTU discovery (always or sometimes) doesn't work with some or all hosts?

  • Which one blocks certain kinds of packet fragmentation?

A common heuristic called "Happy Eyeballs" is one way to choose between routes, but it covers only a few of those criteria.

The truth is, it's extremely hard to answer all those questions, and even if you can, the answers are different for every combination of A and X, and they change over time. Operating systems, web browsers, and apps, even if they implement Happy Eyeballs or something equivalent, tend to be pretty bad at detecting all these edge cases. And every app has to do it separately!
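For a concrete feel of the mechanism, here is a toy sketch of the racing idea behind Happy Eyeballs: start the preferred attempt, give it a short head start, then launch the fallback and take whichever finishes first. The `connect` stub and its delays are stand-ins, not real sockets; a real implementation (RFC 8305) also handles address sorting, errors, and cancellation far more carefully.

```python
import asyncio

async def connect(label, delay):
    # Stand-in for a real socket connect; `delay` models network latency.
    await asyncio.sleep(delay)
    return label

async def race(attempts, head_start=0.25):
    # Launch attempts one at a time; each later attempt starts only if
    # the earlier ones haven't succeeded within `head_start` seconds.
    tasks = []
    try:
        for coro in attempts:
            tasks.append(asyncio.ensure_future(coro))
            done, _ = await asyncio.wait(
                tasks, timeout=head_start, return_when=asyncio.FIRST_COMPLETED)
            if done:
                return done.pop().result()
        # All attempts launched; wait for the first to finish.
        done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        return done.pop().result()
    finally:
        for t in tasks:
            t.cancel()  # abandon the losers

# IPv6 is preferred and tried first, but it's slow today, so IPv4 wins.
winner = asyncio.run(race([connect("IPv6", 2.0), connect("IPv4", 0.05)]))
print(winner)  # IPv4
```

Note how little this covers of the list above: it measures only "which one connects fastest right now", and only once, at connection setup.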

My claim is that the "choose between two internets" problem is the same as the "choose between two flakey wifi routers on the same SSID" problem (Rule #2). All is well as long as both internets (or both wifi routers) are working perfectly. As soon as one is acting weird, your overall results are going to be weird.

...and the Internet always acts weird, because of the tyranny of Postel's Law. Debugging the Internet is a full time job.

...and now there are two internets, with a surprisingly low level of overlap, so your ISP has to build and debug both.

...and every OS vendor has to debug both protocol implementations, which is more than twice as much code.

...and every app vendor has to test with both IPv4 and IPv6, which of course they don't.

We should not be surprised that the combined system is less reliable.

The dream

IPv6 proponents know all this, whether rationally or intuitively or at least empirically. The failure rate of two wonky internets joined together is higher than the failure rate of either wonky internet alone.

This leads them to the same conclusion you've heard so many times: we should just kill one of the internets, so we can spend our time making the one remaining internet less wonky, instead of dividing our effort between the two. Oh, and, obviously the one we kill will be IPv4, thanks.

They're not wrong! It would be a lot easier to debug with just one internet, and you know, if we all had to agree on one, IPv6 is probably the better choice.

But... we don't all have to agree on one, because of the awesome unstoppable terribleness that is Postel's Law. Nobody can declare one internet or the other to be officially dead, because the only thing we know for sure about internets is that they always combine to make The Internet. Someone might try to unplug IPv4 or IPv6, but some other jerk will plug it right back in.

Purity cannot ever be achieved at this kind of scale. If you need purity for your network to be reliable, then you have an unsolvable problem.

The workaround

One thing we can do, though, is build better heuristics.

Ok, actually we have to do better than that, because it turns out that correctly choosing between the two internets for each connection, at the start of that connection, is either not possible or not good enough. Problems like PMTU, fragmentation, NAT resets, and routing changes can interrupt a connection partway through and cause poor performance or dropouts.

I want to go back to a side note I left near the end of The World in Which IPv6 was a Good Design: mobile IP. That is, the ability for your connections to keep going even if you hop between IP addresses. If you had IP mobility, then you could migrate connections between your two internets in real time, based on live quality feedback. You could send the same packets over both links and see which ones work better. If you picked one link and it suddenly stopped, you could retransmit packets on the other link and pick up where you left off. Your precise heuristic wouldn't even matter that much, as long as it tries both ways eventually.

If you had IP mobility, then you could convert the "a AND b" scenario (failure compounds) into the "a OR b" scenario (success compounds).
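Here is a toy simulation of that conversion (an illustration, not a protocol): each "packet" gets one try on a fixed link, versus one try plus a retransmit on the other link when the first fails. With two independently 90%-reliable links, packet-level failover turns roughly 10% loss into roughly 1% loss, the same "a OR b" arithmetic as before.

```python
import random

def simulate(n=100_000, p_fail=0.10, seed=0):
    # Compare delivery rates: stuck on one link vs. retransmit-on-failure.
    rng = random.Random(seed)
    single = both = 0
    for _ in range(n):
        a_ok = rng.random() >= p_fail  # did link A deliver this packet?
        b_ok = rng.random() >= p_fail  # would link B have delivered it?
        single += a_ok        # no mobility: the packet lives or dies on A
        both += a_ok or b_ok  # with mobility: retransmit on B if A failed
    return single / n, both / n

one_link, failover = simulate()
print(f"one link: {one_link:.1%}, with failover: {failover:.1%}")
```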

And you know what, forget about IPv4 and IPv6. The same tricks would work with that redundant cable + DSL setup we mentioned above. Or a phone with both wifi and LTE. Or, given a fancy enough wifi client chipset, smoothly switching between multiple unrelated wifi routers.

This is what we do, in a small way, with Tailscale's VPN connections. We try all your Internet links, IPv4 and IPv6, UDP and TCP, relayed and peer-to-peer. We made mobile IP a real thing, if only on your private network for now. And what do you know, the math works. Tailscale with two networks is more reliable than Tailscale with one network.

Now, can it work for the whole Internet?

This article was originally posted to the Tailscale blog

Read the whole story
56 days ago
Share this story

On Liberating My Smartwatch From Cloud Services

1 Comment and 3 Shares

I’ve often said that if we convince ourselves that technology is magic, we risk becoming hostages to it. Just recently, I had a brush with this fate, but happily, I was saved by open source.

At the time of writing, Garmin is suffering from a massive ransomware attack. I also happen to be a user of the Garmin Instinct watch. I’m very happy with it, and in many ways, it’s magical how much capability is packed into such a tiny package.

I also happen to have a hobby of paddling the outrigger canoe:

I consider the GPS watch to be an indispensable piece of safety gear, especially for the boat’s steer, because it’s hard to judge your water speed when you’re more than a few hundred meters from land. If you get stuck in a bad current, without situational awareness you could end up swept out to sea or worse.

The water currents around Singapore can be extreme. When the tides change, the South China Sea eventually finds its way to the Andaman Sea through the Singapore Strait, causing treacherous flows of current that shift over time. Thus, after every paddle, I upload my GPS data to the Garmin Connect cloud and review the route, in part to note dangerous changes in the ebb-and-flow patterns of currents.

While it’s a clear and present privacy risk to upload such data to the Garmin cloud, we’re all familiar with the trade-off: there’s only 24 hours in the day to worry about things, and the service just worked so well.

Until yesterday.

We had just wrapped up a paddle with particularly unusual currents, and my paddling partner wanted to know our speeds at a few of the tricky spots. I went to retrieve the data and…well, I found out that Garmin was under attack.

Garmin was being held hostage, and transitively, so was access to my paddling data: a small facet of my life had become a hostage to technology.

A bunch of my paddling friends recommended I try Strava. The good news is Garmin allows data files to be retrieved off of the Instinct watch, for upload to third-party services. All you have to do is plug the watch into a regular USB port, and it shows up as a mass storage device.

The bad news is that as I tried to create an account on Strava, all sorts of warning bells went off. The website is full of dark patterns, and when I clicked to deny Strava access to my health-related data, I was met with this tricky series of dialog boxes:

Click “Decline”…

Click “Deny Permission”…

Click “OK”…

Three clicks to opt out, and if I wasn’t paying attention and just kept clicking the bottom box, I would have opted in by accident. After this, I was greeted by a creepy list of people to follow (how do they know so much about me from just an email?), and then there’s a tricky dialog box that, if answered incorrectly, routes you to a spot to enter credit card information as part of your “free trial”.

Since Garmin at least made money by selling me a $200+ piece of hardware, collecting my health data is just icing on the cake; for Strava, my health data is the cake. It’s pretty clear to me that Strava made a pitch to its investors that they’ll make fat returns by monetizing my private data, including my health information.

This is a hard no for me. Instead of liberating myself from a hostage situation, going from Garmin to Strava would be like stepping out of the frying pan and directly into the fire.

So, even though this was a busy afternoon … I’m scheduled to paddle again the day after tomorrow, and it would be great to have my boat speed analytics before then. Plus, I was sufficiently miffed by the Strava experience that I couldn’t help but start searching around to see if I couldn’t cobble together my own privacy-protecting alternative.

I was very pleased to discover an open-source utility called gpsbabel (thank you gpsbabel! I donated!) that can unpack Garmin’s semi-(?)proprietary “.FIT” file format into the interoperable “.GPX” format. From there, I was able to cobble together bits and pieces of XML parsing code and merge it with OpenStreetMaps via the Folium API to create custom maps of my data.

Even with getting “lost” on a detour of trying to use the Google Maps API that left an awful “for development only” watermark on all my map tiles, this only took an evening — it wasn’t the best possible use of my time all things considered, but it was mostly a matter of finding the right open-source pieces and gluing them together with Python (fwiw, Python is a great glue, but a terrible structural material. Do not build skyscrapers out of Python). The code quality is pretty crap, but Python allows that, and it gets the job done. Given those caveats, one could use it as a starting point for something better.

Now that I have full control over my data, I’m able to visualize it in ways that make sense to me. For example, I’ve plotted my speed as a heat map over the course, with circles proportional to the speed at that moment, and a hover-text that shows my instantaneous speed and heart rate:

It’s exactly the data I need, in the format that I want; no more, and no less. Plus, the output is a single html file that I can share directly with nothing more than a simple link. No analytics, no cookies. Just the data I’ve chosen to share with you.

Here’s a snippet of the code that I use to plot the map data:
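(The snippet was posted as an image and didn’t survive into this text. As a stand-in, here is a hypothetical, stdlib-only sketch of the pipeline described, not the author’s actual code: parse GPX trackpoints, converted from .FIT with something like `gpsbabel -i garmin_fit -f activity.fit -o gpx -F activity.gpx`, and compute a speed per point; the real code then feeds these to Folium. The fixed 1 Hz sampling assumption and all names below are mine.)

```python
import math
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    r = 6_371_000.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def track_speeds(gpx_text, interval_s=1.0):
    # Return (lat, lon, speed_m_per_s) for each consecutive pair of
    # trackpoints, assuming a fixed sampling interval (1 Hz here; a
    # fuller version would read each point's <time> element instead).
    root = ET.fromstring(gpx_text)
    pts = [(float(p.get("lat")), float(p.get("lon")))
           for p in root.iter(GPX_NS + "trkpt")]
    return [(b[0], b[1], haversine_m(*a, *b) / interval_s)
            for a, b in zip(pts, pts[1:])]

# Each (lat, lon, speed) triple then becomes a circle on the map, e.g. a
# folium.CircleMarker with radius proportional to speed and a tooltip
# carrying speed and heart rate, saved out as a single HTML file.
```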

Like I said, not the best quality code, but it works, and it was quick to write.

Even better yet, I’m no longer uploading my position or fitness data to the cloud — there is a certain intangible satisfaction in “going dark” for yet another surveillance leakage point in my life, without any compromise in quality or convenience.

It’s also an interesting meta-story about how healthy and vibrant the open-source ecosystem is today. When the Garmin cloud fell, I was able to replace the most important functions of it in just an afternoon by cutting and pasting together various open source frameworks.

The point of open source is not to ritualistically compile our stuff from source. It’s the awareness that technology is not magic: that there is a trail of breadcrumbs any of us could follow to liberate our digital lives in case of a potential hostage situation. Should we so desire, open source empowers us to create and run our own essential tools and services.

Edits: added details on how to take data off the watch, and noted the watch’s price.

Read the whole story
58 days ago
Share this story

Trace Together Token: Teardown and Design Overview

1 Share

On 19 June, GovTech Singapore invited four members of the community to come and inspect their new TraceTogether Token. This token removes the need to carry a phone at all times, and is designed to help those who do not have a smart device capable of running TraceTogether well, including users of older Android devices, non-smartphones, and iOS devices. I was among the group, which also consisted of Roland Turner, Harish Pillay, and Andrew "bunnie" Huang, and we were given the opportunity to see the first public revision of the hardware. In this post I will discuss the goal of the token, give an overview of the hardware, compare it with the app version of TraceTogether, and comment on the protocol changes.

Goal of the TraceTogether Token

The Trace Together Token is a dedicated hardware device that makes it easier to inform people if they may have come in prolonged contact with a person who subsequently was diagnosed with COVID-19. This is its sole purpose.

It is a hardware implementation of the app that GovTech previously developed, and has been installed over half a million times. The TraceTogether Token builds on the app and simplifies its usage: Throw it in a handbag or attach it to a keychain and forget about it while it does its thing.

Comparison With Phone Apps

I won't do an in-depth analysis of the Trace Together app. You can read an independent analysis that Frank Liauw put together to learn more. I worry more about the amount of spying that other popular apps on my phone do. For example, every banking, taxi-booking, and food-delivery app on my phone has uploaded some amount of data to Facebook, Google, and a company called AppsFlyer.

TraceTogether uses a protocol known as BlueTrace, and there are several problems with the protocol that make it challenging to work with.

First, antenna designs vary. As part of the Bluetooth spec, devices can report the amount of power they are currently using to broadcast:

From the Supplement to Bluetooth Core Specification, Part A

BlueTrace includes this information in the advertising beacon, but what does the number mean? Intuitively it should tell you how far away a device is, because you can correlate the strength of the received signal with the broadcast power: If they said they were loudly broadcasting but you received a weak signal, they must be far away. However, antenna designs vary, and just like with humans, one phone's "loud" is another phone's "whisper".
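To make the problem concrete, here is the textbook log-distance estimate (a sketch, not BlueTrace's actual calibration; it assumes `tx_power_dbm` is the RSSI you would measure at one metre):

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float,
                        path_loss_exponent: float = 2.0) -> float:
    # Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d).
    # Solving for d shows why antenna variation wrecks the estimate:
    # at n = 2, every 1 dB of miscalibration shifts the answer by ~12%.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

A phone whose "loud" is really 3 dB quieter than it reports will appear roughly 1.4 times farther away than it actually is.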

There's also the issue of charging. Phones must be charged at least daily. With mobile payment becoming more common, people are incentivized to keep their devices charged and running; even so, it's still common to be hobbled by a phone that has simply run out of battery.

On Android, many device manufacturers are very aggressive about terminating background processes. TraceTogether must always run in the background, but your phone might not realize that and could terminate the process anyway. Anyone who has found themselves unable to receive WhatsApp messages without opening the app has experienced something similar. Additionally, many older Android devices are not compatible with TraceTogether's approach, in which a device can act as either a Bluetooth Central (i.e. "host") or a Peripheral (i.e. "device").

As an aside, it was refreshing to hear Minister Vivian Balakrishnan using the words "Peripheral" and "Central" when discussing issues pertaining to older devices. These are the technical terms for the roles Bluetooth devices play, and he correctly pointed out that devices that do not support Bluetooth Low Energy (BLE) are incompatible with Bluetooth-based contact tracing.

Then there's the iOS problem. Apple does not let apps use Bluetooth in the background, so users must keep TraceTogether open in the foreground. The easy answer here would be to use the Apple-sanctioned tracing protocol, however that protocol is not compatible with BlueTrace and makes very different assumptions about how contact tracing should work. It also assumes that everyone has a modern device, which leaves out the significant portion of the population without recent hardware.

Hardware Overview

The hardware isn't ready yet, and we aren't yet allowed to share photos of the device because supplier contracts are still being worked out. Still, we could identify the major components well enough that I put together this block diagram:

Block Diagram of the Trace Together Token

There are several interesting parts to note about this diagram:

  • There is no battery charger; it's designed to run for several months on a single battery.
  • There is a realtime clock with its own battery, which suggests that time is important in the new protocol.
  • They use a powered antenna to improve performance.
  • The entire system must be extremely low power.

The last point means it is unlikely that they hid a GPS tracker, WiFi radio, or cellular modem in this device. The battery is a small coin cell, which would only last a few hours if it were receiving GPS or communicating via WiFi. Nor are there any other sensors, such as an accelerometer, pressure sensor, or microphone.

All of the major ICs had obfuscated markings so we couldn't identify part numbers. However, the block diagram I sketched above happens to look very similar to the block diagram for Simmel, which is an open-source contact-tracing token put together by bunnie and myself:

Block diagram of Simmel

There are a few notable differences:

  • Simmel uses a PCB antenna because it was a proof-of-concept device, so we were willing to accept reduced range.
  • There was an experiment in using Near Ultrasound as an alternative to Bluetooth for contact tracing, under the theory that it would be lower power. Ultrasound turns out to be too directional, which is why bats can use it for echolocation.
  • TraceTogether uses a separate realtime clock, whereas Simmel currently relies on a stupendous hack to save time in case of a crash, meaning it is less accurate.
  • Simmel runs CircuitPython in an effort to make early development easier, with the intention of rewriting it in a lower-level language later on. The TraceTogether Token presumably runs its own stack.

Despite these minor differences, there are many similarities:

  • Both run on a non-rechargeable battery
  • Both use a voltage regulator to stabilize the battery voltage
  • Both have an external flash memory for storing contacts
  • Both are designed to last for months without user interaction, and be forgotten about until they are needed
  • Both rely on Bluetooth – specifically Bluetooth LE – for interaction
  • Neither contains any additional sensors, since those would add cost and power consumption

Therefore, while the exact hardware details of the Trace Together Token are still obfuscated, I can safely say that it is conceptually extremely similar to Simmel. Any questions you have about the hardware approach GovTech is taking can be answered by looking at the Simmel hardware repository.

BlueTrace Protocol Changes

One of the challenges we ran into when developing Simmel was the power budget. The nRF52833 part we used requires a lot of power to listen to Bluetooth:

The nRF52833 reference manual on power consumption (Receiving)

BlueTrace advises that a device listen for broadcasts 20% of the time, which means the radio is actively receiving 20% of the time. In testing, we observed about 5.9 mA of current draw when receiving data, compared to 0.012 mA when idle. Furthermore, BlueTrace recommends that a device transmit about 90% of the time, which is unfortunate because transmission has similar numbers:

The nRF52833 reference manual on power consumption (Transmitting)

For a mobile phone these numbers are tiny, but when running off of a pair of AAA batteries, a continuous drain of 2 mA means a 1200 mAh AAA battery will be drained in 1200 mAh / 2 mA = 600 hours, or about 25 days.
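Plugging the article's numbers into the duty-cycle arithmetic (the current figures are the nRF52833 measurements quoted above; transmit time would only make the average worse):

```python
RX_MA, IDLE_MA = 5.9, 0.012   # measured receive vs. idle current, in mA
CAPACITY_MAH = 1200.0         # a typical AAA cell

def avg_current_ma(rx_duty: float) -> float:
    # Weighted average of receive current and idle current.
    return rx_duty * RX_MA + (1.0 - rx_duty) * IDLE_MA

def battery_life_days(avg_ma: float) -> float:
    return CAPACITY_MAH / avg_ma / 24.0

listen_avg = avg_current_ma(0.20)    # ~1.19 mA from scanning 20% of the time
print(round(battery_life_days(2.0)))  # 2 mA continuous drain -> prints 25
```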

Part of the problem involves the BlueTrace approach, which follows a traditional Peripheral/Central approach: A Central advertises a GATT Service of "BlueTrace", allowing another device to connect and read the current temporary token / deposit its own temporary token. This is required in order to make a two-way connection in BLE, so it is an extremely common approach. Additionally, ensuring the connection is two-way also ensures that the devices are close enough to matter from an epidemiological perspective.

GovTech must have also run into these issues, because the BlueTrace protocol is being modified for use with the Trace Together Token. Instead of forming a two-way connection, devices now simply broadcast their temporary tokens. The interval for broadcast is much longer, and the scanning interval is much shorter, meaning the device can spend most of its time in a low-power suspend state.


Simmel has 2 MB of flash memory. Each BlueTrace Temporary Token payload is 160 bytes. That means Simmel's (2^21) bytes of flash can hold about 13,000 records, which works out to 624 records per day over a 21-day retention window. We weren't sure how many daily records we could expect, and the number surely was very low during the Circuit Breaker. GovTech must have decided that 160 bytes was too large a record to store on an embedded device, so for the Trace Together Token they reduced the size of the critical part of the payload. In the new protocol, this payload is much smaller, although we don't yet have exact numbers.
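The storage budget works out like this:

```python
FLASH_BYTES = 2 ** 21     # Simmel's 2 MB of flash
PAYLOAD_BYTES = 160       # one BlueTrace temporary-token payload
RETENTION_DAYS = 21

total_records = FLASH_BYTES // PAYLOAD_BYTES    # payloads that fit in flash
per_day = total_records // RETENTION_DAYS       # daily budget over 21 days
```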

Finally, there's the issue of temporary tokens. In the current Trace Together app, a batch of tokens is downloaded from the Ministry of Health every few days. With Simmel, we assumed we could simply download a few months' worth of tokens and store them on the flash memory, consuming them as time passed. GovTech took a different approach with the Trace Together Token, and instead derives the temporary token from a unique ID hashed together with the current time. This approach is similar to how the European DP3T protocol works.
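GovTech hasn't published the exact construction, but a DP3T-flavoured derivation that hashes a per-device secret with the current time window might look like this sketch (the rotation interval and truncation length are assumptions, not GovTech's actual parameters):

```python
import hashlib
import hmac
import struct

ROTATION_SECONDS = 15 * 60  # hypothetical token-rotation interval

def temp_token(device_secret: bytes, unix_time: int) -> bytes:
    epoch = int(unix_time) // ROTATION_SECONDS
    # HMAC the time window with the device's unique secret and broadcast the
    # truncated digest. Only a party holding the secret (here, the Ministry
    # of Health) can link the broadcast tokens back to a device.
    mac = hmac.new(device_secret, struct.pack(">Q", epoch), hashlib.sha256)
    return mac.digest()[:16]
```

With this scheme the token never needs to store a pre-downloaded batch: the next token is always computable from the secret and the realtime clock, which also explains why the new hardware cares about keeping accurate time.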

As a result of all of these changes, many of the challenges we faced when designing Simmel are avoided with the Trace Together Token: They can reduce power by spending less time transmitting and receiving; they don't have to use as much storage to keep track of interactions; and they don't need to store several months' worth of temporary tokens. Overall, these changes are exactly what's needed to implement the protocol in a hardware token.


The approach taken by GovTech when designing the Trace Together Token hardware is sound. The device accomplishes the goals it set out to, while preserving the privacy of the owner. Like the Trace Together app, the Trace Together Token cannot be used to identify the owner merely by looking at Bluetooth broadcasts – the only entity that can correlate logged data to a human is the Ministry of Health.

During this session we didn't have access to the software; as a tracing beacon, there's not much of a user interface beyond a blinking LED anyway. They also weren't ready to let us attach debug probes, so we can't draw any conclusions about the software itself. However, given the PCB design and the system's power requirements, there isn't much they could hide.

Overall I'm pleased with the direction they are going in with the Trace Together Token, and look forward to getting one of my very own.

Read the whole story
91 days ago
Share this story