Edge Computing at Chick-fil-A


by Brian Chambers, Caleb Hurd, Sean Drucker, Alex Crane, Morgan McEntire, Jamie Roberts, and Laura Jauch (the Chick-fil-A IOT/Edge team)

In a recent post, we shared how we do bare metal clustering for Kubernetes on-the-fly at the Edge in our restaurants. One of the most common (and best) questions we got was “but why?” In this post we will answer that question. If you are interested in more, or prefer video to text, you can also check out Brian and Caleb’s presentation at QConNY 2018.

(Including Kubernetes. Look for our new container orchestration tool, Kowbernetes, coming soon)

Why K8s in a restaurant?

Why does a restaurant company like Chick-fil-A want to deploy Kubernetes at the Edge? What are the goals? That, we can answer.

  • Low-latency, internet-independent applications that can reliably run our business
  • High availability for these applications
  • Platform that enables rapid innovation and that allows delivery of business functionality to production as quickly as possible
  • Horizontal scale — infrastructure and application development teams

Capacity Pressures

Chick-fil-A is a very successful company in large part because of our fantastic food and customer service — both the result of the great people that operate and work in our restaurants. They are unmatched in the industry. The result is very busy restaurants. If you have been to a Chick-fil-A restaurant, this is not news to you. If you have not been, it’s time to go!

A busy Chick-fil-A in Manhattan w/ lines out the door. Photo credit Wall Street Journal

While we are very grateful for our successes, they have created a lot of capacity challenges for us as a business. To put it in perspective, we do more sales in six days a week than our competitors do in seven (we are closed on Sunday). Many of our restaurants are doing greater than three times the volume that they were initially designed for. These are great problems to have, but they create some extremely difficult challenges related to scale.

At Chick-fil-A, we believe the solutions to many of these capacity problems are technology solutions.

A new generation of workloads

One of our approaches to solving these problems is to invest in a smarter restaurant.

Our goal: simplify the restaurant experience for Owner/Operators and their teams and optimize for great, hot, tasty food served quickly and with a personal touch, all while increasing the capacity of our existing footprint.

Our hypothesis: By making smarter kitchen equipment we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.

As a simple example, imagine a forecasting model that attempts to predict how many Waffle Fries (or replace with your favorite Chick-fil-A product) should be cooked over every minute of the day. The forecast is created by an analytics process running in the cloud that uses transaction-level sales data from many restaurants. This forecast can most certainly be produced with a little work. Unfortunately, it is not accurate enough to actually drive food production. Sales in Chick-fil-A restaurants are prone to many traffic spikes and are significantly affected by local events (traffic, sports, weather, etc.).

However, if we were to collect data from our point-of-sale system’s keystrokes in real-time to understand current demand, add data from the fryers about work in progress inventory, and then micro-adjust the initial forecast in-restaurant, we would be able to get a much more accurate picture of what we should cook at any given moment. This data can then be used to give a much more intelligent display to a restaurant team member that is responsible for cooking fries (for example), or perhaps to drive cooking automation in the future.
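To make that concrete, here is a minimal sketch of the kind of micro-adjustment we are describing (the names, numbers, and blending rule are purely illustrative, not our production model):

```python
from dataclasses import dataclass

@dataclass
class DemandSignal:
    pos_orders_last_5min: int       # fries ordered at the point of sale recently
    fryer_batches_in_progress: int  # work-in-progress inventory reported by fryers

def adjust_forecast(cloud_baseline: float, signal: DemandSignal,
                    batch_size: int = 8, blend: float = 0.5) -> int:
    """Blend the cloud baseline with observed in-restaurant demand.

    cloud_baseline: batches/minute predicted by the cloud analytics job.
    blend: weight given to the real-time signal (0 = trust the cloud only).
    The names and the blending rule here are illustrative assumptions.
    """
    observed_rate = signal.pos_orders_last_5min / 5 / batch_size  # batches/minute
    target = (1 - blend) * cloud_baseline + blend * observed_rate
    # Subtract what is already cooking so the display shows net batches to start.
    return max(0, round(target) - signal.fryer_batches_in_progress)

# Example: the cloud says 3 batches/minute, but the POS shows a lunch-rush spike.
print(adjust_forecast(3.0, DemandSignal(pos_orders_last_5min=200,
                                        fryer_batches_in_progress=2)))
```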

Goals like this led us to develop an Internet of Things (IOT) platform for our restaurants. To successfully scale our business we need the ability to 1) collect data and 2) use it to drive automation in the restaurant.

Rather than select many different vendor solutions that were silo’d and disconnected, and therefore difficult to integrate, we wanted an open ecosystem that we own. Our open approach allows internal developers or external partners to develop connected “things” or applications for the Edge and leverage our platform for common needs (security, connectivity, etc.).

Put another way, the IOT/Edge team owns a small subset of the overall application base that is deployed on the edge, and is effectively a runtime platform team for the in-restaurant compute platform.

Architecture Overview

Here is a look at the architecture we developed (and are using today) to meet these goals.

Simple view of our current architecture, w/ K8s changes highlighted

Cloud Control Plane

Today, we run our higher order control plane and core IoT services in the cloud. This includes services like:

  • Authorization Server — device identity management
  • Data ingest / routing — collect data and send it where it needs to go
  • Operational time series data — logs, monitoring, alerting, tracing
  • Deployment management — self-service deployment for our application teams using GitOps (kudos to our friends at Weaveworks for sharing the term). We plan to share our work and code soon.
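To make the GitOps idea concrete, here is a rough sketch of the reconcile loop an in-cluster agent performs (this is an illustration of the pattern, not our actual tooling; the repository paths are hypothetical, and in practice a purpose-built agent such as Weave Flux does this work for you):

```python
import subprocess
import time

REPO_DIR = "/opt/edge-deploy/config-repo"   # hypothetical checkout of the config repo
MANIFEST_DIR = f"{REPO_DIR}/restaurant"     # hypothetical directory of K8s manifests

def reconcile_once() -> None:
    """Pull the desired state from git and apply it to the local cluster."""
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    subprocess.run(["kubectl", "apply", "-f", MANIFEST_DIR], check=True)

if __name__ == "__main__":
    while True:
        reconcile_once()
        time.sleep(60)   # git remains the source of truth; the cluster converges to it
```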

These services also run in Kubernetes so that we have a common paradigm across all of our team’s deployments.

Edge

“Edge Computing” is the idea of putting compute resources close to the action to attain a higher level of availability and/or lower level of latency.

We think of our Edge Computing environment as a “micro private cloud”. By this, we mean that we provide developers with a series of helpful services and a place to deploy their applications on our infrastructure.

Does this make us the largest “cloud provider” in the world? You can be the judge of that.

In all seriousness, this approach gives us a unique kind of scale. Rather than a few large K8s clusters with tens-to-hundreds-of-thousands of containers, at full scale we will have more than 2000 clusters with tens of containers per cluster. And that number grows significantly as we open many new restaurants every year.

Our edge workloads include:

  • platform services such as an authentication token service, pub/sub messaging (MQTT), log collection/exfiltration (FluentBit), monitoring (Prometheus), etc.
  • applications that interact with “things” in the restaurant
  • simple microservices that serve HTTP requests
  • machine learning models that synthesize cloud-developed forecasts with real-time events from MQTT to make decisions at the edge and drive automation
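As a small illustration of how such a workload might consume real-time events, here is a minimal MQTT subscriber sketch in Python using paho-mqtt (1.x-style callback API); the broker address, topic name, and payload fields are assumptions for illustration, not our actual schema:

```python
import json
import paho.mqtt.client as mqtt   # pip install paho-mqtt

TOPIC = "restaurant/pos/orders"   # illustrative topic name

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # Feed the real-time event into whatever edge-side decision logic you run,
    # e.g. a forecast-adjustment step like the sketch earlier in this post.
    print("order event:", event.get("item"), event.get("quantity"))

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.edge.local", 1883)   # hypothetical in-restaurant broker address
client.subscribe(TOPIC)
client.loop_forever()
```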

Our physical edge environment obtains its connectivity from multiple in-restaurant switches (and two routers), so we operate on a very highly available LAN.

Today, nearly all of the data that is collected at the Edge is ephemeral and only needs to exist for a short time before being exfiltrated to the cloud. This could change in the future, but it has kept our early-stage deployments very simple and made them much easier to manage.

Since we have a highly available Kubernetes cluster with data replicated across all nodes, we can ensure that we can retain any data that is collected until a time when the internet is again available for data exfiltration. We can also aggregate, compress, or drop data dynamically at the Edge based on business need.
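A minimal sketch of that store-and-forward idea, assuming a hypothetical cloud ingest endpoint and a simple drop-oldest policy (not our actual pipeline), might look like this:

```python
import queue
import requests   # pip install requests

INGEST_URL = "https://ingest.example.com/events"   # hypothetical cloud endpoint
buffer: "queue.Queue[dict]" = queue.Queue(maxsize=100_000)

def record(event: dict) -> None:
    """Hold events locally; drop the oldest if the buffer fills while offline."""
    if buffer.full():
        buffer.get_nowait()
    buffer.put_nowait(event)

def exfiltrate() -> None:
    """Flush buffered events to the cloud; on failure, keep them for the next attempt."""
    pending = []
    while not buffer.empty():
        pending.append(buffer.get_nowait())
    try:
        requests.post(INGEST_URL, json=pending, timeout=5).raise_for_status()
    except requests.RequestException:
        for event in pending:        # internet still down: put everything back
            record(event)
```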

Given all of this, we still use the cloud first for our services whenever possible. Edge is the fallback deployment target when high-availability, low-latency, in-restaurant applications are a must.

Following in the footsteps of giants

We run our Edge infrastructure on commodity hardware that costs us, ballpark, $1000/restaurant. We wanted to keep our hardware investment low-cost and horizontally scalable. We have not found a way to make it dynamic/elastic yet, but maybe one day (order on Amazon w/ free 30-second shipping perhaps?).

With our architecture, we can add additional compute capacity as-needed by adding additional devices. We may upsize our hardware footprint in the future, but what we have makes sense for our workloads today. Google also got started scaling workloads on commodity hardware (written in 2003, where has the time gone?)… so we’re in good company!

We hope our approach to Edge infrastructure encourages creative thinking for anyone that is working with scarce resources (physical space, budget, or otherwise). I think that’s everyone.

Everything else

Finally, we have our connectivity layer and the IOT things. Many of our devices are fully-powered (no batteries) but still constrained on chipsets/processing power. Wi-Fi is the default method for connectivity. We have not committed to a low-power protocol yet, but are interested in LoRa and the WiFi Halo (802.11ah) project.

For things/physical devices, our team also provides developers with an SDK for executing on-boarding flows (all OAuth based) and for accessing services within the Edge environment such as MQTT. Brian talked about this more extensively at QConNY 2017 (note that we were using Docker Swarm rather than Kubernetes at the time).

Containers at the Edge

Why would you want to run containers at the Edge? For the same reason you would run them in the cloud (or anywhere else).

Dependency management becomes easier. Testing is easier. The developer experience is easier and better. Teams can move much faster and more autonomously, especially when reasonable points of abstraction (such as k8s namespaces) and resource limits (CPU/RAM) are applied.
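For instance, a sketch of carving out a per-team namespace with a CPU/RAM quota using the Kubernetes Python client might look like the following (the team name and limits are made up for illustration):

```python
from kubernetes import client, config   # pip install kubernetes

config.load_kube_config()               # or load_incluster_config() inside the cluster
core = client.CoreV1Api()

TEAM = "fry-forecasting"                # hypothetical application team

# A namespace gives the team its own point of abstraction...
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=TEAM)))

# ...and a ResourceQuota caps how much CPU/RAM its workloads can claim.
core.create_namespaced_resource_quota(
    namespace=TEAM,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{TEAM}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"limits.cpu": "2", "limits.memory": "2Gi"})))
```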

Another key design goal was reliability for our critical applications, meaning no single points-of-failure in the architecture. There are many ways this type of goal can be achieved, but we decided that having more than one physical host at the Edge and a container-based orchestration layer was the best approach for the types of workloads we have and for the skillset of our teams.

We initially called our strategy “Redundant Restaurant Compute”, which really speaks to the goal behind our approach. Later, we transitioned to calling it “Edge Computing”.

The ability to orchestrate services effectively and to preserve a desired number of replicas quickly and consistently attracted us to Kubernetes.

Why Kubernetes?

At first, we planned to use Docker Swarm for container orchestration, but made the move to Kubernetes in early 2018 because we believe its core capabilities and surrounding ecosystem are (by far) the strongest. We are excited about taking advantage of some of the developments around Kubernetes like Istio and OpenFaaS.

Most importantly, we firmly believe…

Ideas, in and of themselves, have no value.

Code, in and of itself, has no value.

The only thing that has value is code that is running in production. Only then have you created value for users and had the opportunity to impact your business.

Therefore, at Chick-fil-A, we want to optimize for taking great ideas, turning them into code, and getting that code running in production as fast as possible.

Research says that using the latest and greatest technology (like Kubernetes) has no correlation to a team or organization’s success. None. The ability to turn ideas into code and get code into production rapidly does.

If you can accomplish this goal and meet your requirements with VMWare, a single machine, some other clustering tool or orchestration layer, or using just the cloud, we would not try and convince you to switch to Kubernetes and follow our lead. The simplest possible solution is usually the best solution.

At Chick-fil-A, we believe that the best way to achieve these goals is to embrace containerization, and to do so with Kubernetes. At the end of the day, it’s all there to help you “Eat More Chicken”.

Next

What are our next steps?

In the next 18-24 months, we expect to increase the number of smart/connected devices in our restaurants to > 100,000 and complete our chain-wide rollout of Kubernetes to the Edge. The number of use cases for “brain applications” that control the restaurant from the edge continues to grow, and we look forward to serving them and sharing more with the community about what we learn.

We will also take a close look at the Google GKE On-Prem service that was just announced, since it might save us from building and managing clusters in the future. What we have done to date is a sunk cost, and if we can re-focus resources on differentiating work rather than on infrastructure, you can be certain we will.

If we failed to answer questions you have about what we’re doing, don’t hesitate to reach out on LinkedIn. If we can help you with any challenges with Kubernetes at the Edge, please let us know. At Chick-fil-A, it is always our pleasure to share, and (kudos to Caleb for this) “our pleasure to server you”.


New US Tariffs are Anti-Maker and Will Encourage Offshoring


The new 25% tariffs announced by the USTR, set to go into effect on July 6th, are decidedly anti-Maker and ironically pro-offshoring. I’ve examined the tariff lists (List 1 and List 2), and they tax the import of basic components, tools and sub-assemblies, while giving fully assembled goods a free pass. The USTR’s press release is careful to mention that the tariffs “do not include goods commonly purchased by American consumers such as cellular telephones or televisions.”

Think about it – big companies with the resources to organize thousands of overseas workers making TVs and cell phones will have their outsourced supply chains protected, but small companies that still assemble valuable goods from basic parts inside the US are about to see significant cost increases. Worse yet, educators, already forced to work with a shoe-string budget, are going to return from their summer recess to find that basic parts, tools and components for use in the classroom are now significantly more expensive.


Above: The Adafruit MetroX Classic Kit is representative of a typical electronics education kit. Items marked with an “X” in the above image are potentially impacted by the new USTR tariffs.

New Tariffs Reward Offshoring, Encourage IP Flight

Some of the most compelling jobs to bring back to the US are the so-called “last screw” system integration operations. These often involve the complex and precise process of integrating simple sub-assemblies into high-value goods such as 3D printers or cell phones. Quality control and IP protection are paramount. I often advise startups to consider putting their system integration operations in the US because difficult-to-protect intellectual property, such as firmware, never has to be exported if the firmware upload operation happens in the US. The ability to leverage China for low-value subassemblies opens more headroom to create high-value jobs in the US, improving the overall competitiveness of American companies.

Unfortunately, the structure of the new tariffs is exactly the opposite of what you would expect to bring those jobs back to the US. Stiff new taxes on simple components, sub-assemblies, and tools like soldering irons, contrasted against a lack of taxation on finished goods, push business owners to send these “last screw” operations overseas. Basically, with these new tariffs, the more value-add sent outside the borders of the US, the more profitable a business will be. Not even concerns over IP security can overcome a 25% increase in base costs and keep operations in the US.

It seems the intention of the new tariff structure was to minimize the immediate pain that voters would feel in the upcoming mid-terms by waiving taxes on finished goods. Unfortunately, the reality is it gives small businesses that were once considering setting up shop in the US a solid reason to look off-shore, while rewarding large corporations for heavy investments in overseas operations.

New Tariffs Hurt Educators and Makers

Learning how to blink a light is the de-facto introduction to electronics. This project is often done with the help of a circuit board, such as a Microbit or Chibi Chip, and a type of light known as an LED. Unfortunately, both of those items – simple circuit boards and LEDs – are about to get 25% more expensive with the new tariffs, along with other Maker and educator staples such as capacitors, resistors, soldering irons, and oscilloscopes. The impact of this cost hike will be felt throughout the industry, but most sharply by educators, especially those serving under-funded school districts.


Above: Learning to blink a light is the de-facto introduction to electronics, and it typically involves a circuit board and an LED, like those pictured above.

Somewhere on the Pacific Ocean right now floats a container of goods for ed-tech startup Chibitronics. The goods are slated primarily for educators and Makers who are stocking up for the fall semester. The container will arrive in the US the second week of July, and will likely be greeted by a heavy import tax. I know this because I’m directly involved in the startup’s operations. Chibitronics’ core mission is to serve the educator market, and as part of that we routinely offer deep discounts on bulk products for educators and school systems. Now, thanks to the new tariffs on the basic components that educators rely upon to teach electronics, we are less able to fulfill our mission.

A 25% jump in base costs forces us to choose between immediate price increases or cutting the salaries of our American employees who support the educators. These new tariffs are a tax on America’s future – they deprive some of the most vulnerable groups of access to technology education, making future American workers less competitive on the global stage.


Above: Educator-oriented learning kits like the Chibitronics “Love to Code” are slated for price increases this fall due to the new tariffs.

Protectionism is Bad for Technological Leadership

Recently, I was sent photos by Hernandi Krammes of a network card that was manufactured in Brazil around 1992. One of the most striking features of the card was how retro it looked – straight out of the 80’s, a full decade behind its time. This is a result of Brazil’s policy of protectionist tariffs on the import of high-tech components. While stiff tariffs on the import of microchips drove investment in local chip companies, trade barriers meant the local companies didn’t have to be as competitive. With less incentive to re-invest or upgrade, local technology fell behind the curve, leading ultimately to anachronisms like the Brazilian Ethernet card pictured below.


Above: this Brazilian network card from 1992 features design techniques from the early 80’s. It is large and clunky compared to contemporaneous cards.

Significantly, it’s not that the Brazilian engineers were any less clever than their Western counterparts: they displayed considerable ingenuity getting a network card to work at all using primarily domestically-produced components. The tragedy is that instead of using their brainpower to create industry-leading technology, most of their effort went into playing catch-up with the rest of the world. By the time protectionist policies were repealed in Brazil, the local industry was too far behind to effectively compete on a global scale.

Should the US follow Brazil’s protectionist stance on trade, it’s conceivable that some day I might be remarking on the quaintness of American network cards compared to their more advanced Chinese or European counterparts. Trade barriers don’t make a country more competitive – in fact, quite the opposite. In a competition of ideas, you want to start with the best tech available anywhere; otherwise, you’re still jogging to the starting line while the competition has already finished their first lap.

Stand Up and Be Heard

There is a sliver of good news in all of this for American Makers. The list of commodities targeted in the trade war is not yet complete. The “List 2” items – which include all manner of microchips, motors, and plastics (such as 3D printer PLA filament and acrylic sheets for laser cutting) that are building blocks for small businesses and Makers – have yet to be ratified. The USTR website has indicated that in the coming weeks it will disclose a process for public review and comment. Once this process is made transparent – whether you are a small business owner or the parent of a child with technical aspirations – I encourage you to share your stories and concerns on how you will be negatively impacted by these additional tariffs.

Some of the List 2 items still under review include:

9030.31.00 Multimeters for measuring or checking electrical voltage, current, resistance or power, without a recording device
8541.10.00 Diodes, other than photosensitive or light-emitting diodes
8541.40.60 Diodes for semiconductor devices, other than light-emitting diodes, nesoi
8542.31.00 Electronic integrated circuits: processors and controllers
8542.32.00 Electronic integrated circuits: memories
8542.33.00 Electronic integrated circuits: amplifiers
8542.39.00 Electronic integrated circuits: other
8542.90.00 Parts of electronic integrated circuits and microassemblies
8501.10.20 Electric motors of an output of under 18.65 W, synchronous, valued not over $4 each
8501.10.60 Electric motors of an output of 18.65 W or more but not exceeding 37.5 W
8501.31.40 DC motors, nesoi, of an output exceeding 74.6 W but not exceeding 735 W
8544.49.10 Insulated electric conductors of a kind used for telecommunications, for a voltage not exceeding 80 V, not fitted with connectors
8544.49.20 Insulated electric conductors nesoi, for a voltage not exceeding 80 V, not fitted with connectors
3920.59.80 Plates, sheets, film, etc, noncellular, not reinforced, laminated, combined, of other acrylic polymers, nesoi
3916.90.30 Monofilament nesoi, of plastics, excluding ethylene, vinyl chloride and acrylic polymers

Here’s some of the “List 1” items that are set to become 25% more expensive to import from China, come July 6th:

Staples used by every Maker or electronics educator:

8515.11.00 Electric soldering irons and guns
8506.50.00 Lithium primary cells and primary batteries
8506.60.00 Air-zinc primary cells and primary batteries
9030.20.05 Oscilloscopes and oscillographs, specially designed for telecommunications
9030.33.34 Resistance measuring instruments
9030.33.38 Other instruments and apparatus, nesoi, for measuring or checking electrical voltage, current, resistance or power, without a recording device
9030.39.01 Instruments and apparatus, nesoi, for measuring or checking

Circuit assemblies (like Microbit, Chibi Chip, Arduino):

8543.90.68 Printed circuit assemblies of electrical machines and apparatus, having individual functions, nesoi
9030.90.68 Printed circuit assemblies, NESOI

Basic electronic components:

8532.21.00 Tantalum fixed capacitors
8532.22.00 Aluminum electrolytic fixed capacitors
8532.23.00 Ceramic dielectric fixed capacitors, single layer
8532.24.00 Ceramic dielectric fixed capacitors, multilayer
8532.25.00 Dielectric fixed capacitors of paper or plastics
8532.29.00 Fixed electrical capacitors, nesoi
8532.30.00 Variable or adjustable (pre-set) electrical capacitors
8532.90.00 Parts of electrical capacitors, fixed, variable or adjustable (pre-set)
8533.10.00 Electrical fixed carbon resistors, composition or film types
8533.21.00 Electrical fixed resistors, other than composition or film type carbon resistors, for a power handling capacity not exceeding 20 W
8533.29.00 Electrical fixed resistors, other than composition or film type carbon resistors, for a power handling capacity exceeding 20 W
8533.31.00 Electrical wirewound variable resistors, including rheostats and potentiometers, for a power handling capacity not exceeding 20 W
8533.40.40 Metal oxide resistors
8533.40.80 Electrical variable resistors, other than wirewound, including rheostats and potentiometers
8533.90.80 Other parts of electrical resistors, including rheostats and potentiometers, nesoi
8541.21.00 Transistors, other than photosensitive transistors, with a dissipation rating of less than 1 W
8541.29.00 Transistors, other than photosensitive transistors, with a dissipation rating of 1 W or more
8541.30.00 Thyristors, diacs and triacs, other than photosensitive devices
8541.40.20 Light-emitting diodes (LED’s)
8541.40.70 Photosensitive transistors
8541.40.80 Photosensitive semiconductor devices nesoi, optical coupled isolators
8541.40.95 Photosensitive semiconductor devices nesoi, other
8541.50.00 Semiconductor devices other than photosensitive semiconductor devices, nesoi
8541.60.00 Mounted piezoelectric crystals
8541.90.00 Parts of diodes, transistors, similar semiconductor devices, photosensitive semiconductor devices, LED’s and mounted piezoelectric crystals
8504.90.75 Printed circuit assemblies of electrical transformers, static converters and inductors, nesoi
8504.90.96 Parts (other than printed circuit assemblies) of electrical transformers, static converters and inductors
8536.50.90 Switches nesoi, for switching or making connections to or in electrical circuits, for a voltage not exceeding 1,000 V
8536.69.40 Connectors: coaxial, cylindrical multicontact, rack and panel, printed circuit, ribbon or flat cable, for a voltage not exceeding 1,000 V
8544.49.30 Insulated electric conductors nesoi, of copper, for a voltage not exceeding 1,000 V, not fitted with connectors
8544.49.90 Insulated electric conductors nesoi, not of copper, for a voltage not exceeding 1,000 V, not fitted with connectors
8544.60.20 Insulated electric conductors nesoi, for a voltage exceeding 1,000 V, fitted with connectors
8544.60.40 Insulated electric conductors nesoi, of copper, for a voltage exceeding 1,000 V, not fitted with connectors

Parts to fix your phone if it breaks:

8537.10.80 Touch screens without display capabilities for incorporation in apparatus having a display
9033.00.30 Touch screens without display capabilities for incorporation in apparatus having a display
9013.80.70 Liquid crystal and other optical flat panel displays other than for articles of heading 8528, nesoi
9033.00.20 LEDs for backlighting of LCDs
8504.90.65 Printed circuit assemblies of the goods of subheading 8504.40 or 8504.50 for telecommunication apparatus

Power supplies:

9032.89.60 Automatic regulating or controlling instruments and apparatus, nesoi
9032.90.21 Parts and accessories of automatic voltage and voltage-current regulators designed for use in a 6, 12, or 24 V system, nesoi
9032.90.41 Parts and accessories of automatic voltage and voltage-current regulators, not designed for use in a 6, 12, or 24 V system, nesoi
9032.90.61 Parts and accessories for automatic regulating or controlling instruments and apparatus, nesoi
8504.90.41 Parts of power supplies (other than printed circuit assemblies) for automatic data processing machines or units thereof of heading 8471
3 public comments
jepler
179 days ago
reply
bunnie lays it out for you
Earth, Sol system, Western spiral arm
philipstorry
179 days ago
reply
Trade wars are never a good idea unless you have a monopoly.
Bunnie lays out the long-term implications for America's economy very effectively here.
It can't be the intended result that the big multinationals will potentially benefit by removing small competition. It can't be the intended result that this restricts access to educational opportunities.
Sadly, I doubt that those behind the trade war care about anything other than how tough they think they look...
London, United Kingdom
brennen
180 days ago
reply
This is pretty fucking relevant both to my interests and to my continued gainful employment.
Boulder, CO

How The New York Times Uses Software To Recognize Members of Congress

Illustration by Greg Kletsel

Even if you’ve covered Congress for The New York Times for a decade, it can be hard to recognize which member you’ve just spoken with. There are 535 members, and with special elections every few months, members cycle in and out relatively frequently. So when former Congressional Correspondent Jennifer Steinhauer tweeted “Shazam, but for House members faces” in early 2017, The Times’s Interactive News team jumped on the idea.

Our first thought was: Nope, it’s too hard! Computer vision and face recognition are legitimately difficult computer science problems. Even a prototype would involve training a model on the faces of every member of Congress, and just getting the photographs to train with would be an undertaking.

But we did some Googling and found the Amazon Rekognition API. This service has a “RecognizeCelebrity” endpoint that happens to include every member of Congress as well as several members of the Executive branch. Problem solved! Now we’re just talking about knitting together a few APIs.
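For anyone curious what that looks like in practice, calling the celebrity endpoint from Python with boto3 is roughly this (the photo filename and region are placeholders):

```python
import boto3   # pip install boto3; AWS credentials must already be configured

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical snapshot taken in a Capitol hallway.
with open("hallway_photo.jpg", "rb") as f:
    response = rekognition.recognize_celebrities(Image={"Bytes": f.read()})

# Each match comes back with a name and a confidence score.
for celeb in response["CelebrityFaces"]:
    print(celeb["Name"], celeb["MatchConfidence"])
```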

As we began working on this application, we understood that there are numerous, valid privacy concerns about using face recognition technology. And Interactive News, a programming team embedded in the newsroom, abides by all aspects of The Times’s ethical journalism handbook, including how we gather information about the people we cover. In this case, we decided that using Rekognition’s celebrity endpoint meant we would only recognize members of Congress and other “celebrities” in Rekognition’s database.

By the end of the summer, Interactive News interns Gautam Hathi and Sherman Hewitt had built a prototype based on some conversations with me and my colleague Rachel Shorey. To use the prototype, a congressional reporter could snap a picture of a congress member, text it to our app, and get back an annotated version of the photograph identifying any members and giving a confidence score.

However, we discovered a new round of difficulties. Rekognition incorrectly identified some members of Congress as similar-looking celebrities — like one particularly funny instance where it confused Bill Nelson with Bill Paxton. Additionally, our hit rate on photographs was very low because the halls of the Capitol are poorly lit and the photographs we took for testing were consistently marred by shadow and blur. Bad connectivity in the basement of the Capitol made sending and receiving an MMS slow and error-prone during our testing. And, of course, there were few places in the Capitol where we could really get the photographs we needed without committing a foul.

Gautam and Sherman got around the “wrong celebrity” problem with a novel approach: A hardcoded list of Congresspeople and their celebrity doppelgangers. We grew more confident taking photographs and only sent the ones where members were better lit.
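In code, that workaround can be as simple as a lookup table applied to Rekognition’s output; the sketch below is a guess at the shape of it, seeded only with the Bill Paxton/Bill Nelson example from above:

```python
# Map the celebrity Rekognition tends to return to the member it actually is.
# Only the Paxton/Nelson mix-up is from the article; the table structure itself
# is an illustrative guess at the workaround, not the actual list.
DOPPELGANGERS = {"Bill Paxton": "Bill Nelson"}

def resolve(name: str) -> str:
    return DOPPELGANGERS.get(name, name)
```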

A text-based interface is easiest for reporters to use, so while texting is slow, it’s superior to a web service in the low-bandwidth environment of the Capitol.

In addition to confirming the identity of a member, Who The Hill has helped The Times tell some stories we couldn’t have reported otherwise. Most recently, Rachel Shorey found members of Congress at an event hosted by a SuperPAC by trawling through images found on social media and finding matches.

If you’re interested in running your own version, the code for Who The Hill is open sourced under the Apache 2.0 license. The latest version includes a command-line interface in case you’d like to use the power of Amazon’s Rekognition to dig through a collection of photographs on your local machine without sending a pile of MMS messages.

Our service is far from infallible. But Times reporters like Thomas Kaplan love having a backup for when they can’t get a moment with a member to confirm their identity. “Of course,” says Kaplan, “the most reliable way to figure out a member’s identity is the old-fashioned way: Just ask them.”


How The New York Times Uses Software To Recognize Members of Congress was originally published in Times Open on Medium, where people are continuing the conversation by highlighting and responding to this story.


Developing Apps for Your TV the Easy and Open Way


The biggest screen in your house would seem a logical place to integrate cloud apps, but TVs are walled gardens. While it’s easy enough to hook up a laptop or PC and pop open a browser, there’s no simple, open framework for integrating all that wonderful data over the TV’s other inputs.

Until now. Out of the box, NeTV2’s “NeTV Classic Mode” makes short work of overlaying graphics on top of any video feed. And thanks to the Raspberry Pi bundled in the Quickstart version, NeTV2 app developers get to choose from a diverse and well-supported ecosystem of app frameworks to install over the base Raspbian image shipped with every device.

For example, Alasdair Allan’s article on using the Raspberry Pi with Magic Mirror and Google AIY contains everything you need to get started on turning your TV into a voice-activated personal assistant. I gave it a whirl, and in just one evening I was able to concoct the demo featured in the video below.



Magic Mirror is a great match for NeTV2, because all the widgets are formatted to run on a black background. Once loaded, I just had to set the NeTV2’s chroma key color to black and the compositing works perfectly. Also, Google AIY’s Voicekit framework “just worked” out of the box. The only fussy bit was configuring it to work with my USB microphone, but thankfully there’s a nice Hackaday article detailing exactly how to do that.

Personally, I find listening to long-form replies from digital assistants like Alexa or Google Home a bit time consuming. As you can see from this demo, NeTV2 lets you build a digital assistant that pops up data and notifications over the biggest screen in your house using rich visual formats. And the best part is, when you want privacy, you can just unplug the microphone.

If you can develop an app that runs on a Raspberry Pi, you already know everything you need to integrate apps into any TV. Thanks to NeTV2, there’s never been an easier or more open way to make the biggest screen in your house the smartest screen.

The NeTV2 is crowdfunding now at CrowdSupply.com, and we’re just shy of our first stretch goal: a free Tomu bundled with every board. Normally priced at $30, Tomu is a tiny open-source computer that fits in a USB Type-A port, and it’s the easiest way to add an extra pair of status LEDs to an NeTV2. Help unlock this deal by backing now and spreading the word!

1 public comment
jepler
193 days ago
reply
I don't need an NeTV2 but I did impulse-buy some Tomu.
Earth, Sol system, Western spiral arm

Mission vs Strategy: Github and Open Source


Today, Microsoft announced its $7.5 billion acquisition of Github. I want to start with a congratulations to my friends at Github — I think this is a great outcome, and Microsoft is probably one of the best long term homes for Github.

I want to address some of the negativity about the acquisition. Parts of the open source community are worried that Microsoft is going to ruin Github. I think these concerns are misplaced at a tactics and strategy level. However, at a mission level, I think it is important to compare Github to an open source foundation like the Apache Software Foundation [1].

I’ve been a long term user of Github and an advocate for using it for many years. I’m also a member of the Apache Software Foundation. These are two of the juggernauts of the modern open source movement. They both are trying to encourage contributions to existing projects, they are both trying to get communities to live and grow on their platforms.

I think parts of the community are missing something important: the ASF and Github are alike in so many ways, but they have massively different missions.

Different Missions

Github was a venture backed, for-profit corporation. The tactics and strategy Github has used to create returns are based on growing open source communities. This means that as an open source community, you had a short term synergistic relationship with Github. The Github product helped your community grow and be productive. This is great. Github also had other strategies that leveraged its open source popularity to create business services and an enterprise on-premise product.

Github’s mission, as a for-profit corporation, is to generate a financial return. VC backing, which also mandates the generation of a return, only reinforces this mission.

Apache is a non-profit 501(c)(3) foundation. Its tactics and strategy are to provide services and support for open source communities. Seemingly, not that different from a community point of view in the short term.

Apache’s mission, however, is to provide software for the public good. It’s literally the first line on the ASF About Page.

A mission is important. When people in a place change, so will the tactics and strategy.

100 years later

In 100 years, I hope the ASF is still a relevant way to provide software for the public good. I think there is a decent chance of this.

In 100 years, I don’t know if even the Microsoft brand will exist, let alone Github.

Github was never a replacement for the ASF, and at the same time, the ASF should learn from it. Github massively widened who contributes to open source. They made contributions easier. They innovated on what open source even means. They built an amazing product that I use every day.

The communities I’m part of, I believe, will outlive Github. Communities can benefit from these for-profit endeavors. Synergy between their needs and the tactics of a for-profit company is good for the community. But as a community we must understand that we are part of the product; there is a benefit to the company for helping.

Github’s strategy included building an amazing product, but don’t confuse the missions.


[1]: I used the ASF as the primary example in this post, but you can swap ASF for Node Foundation, Linux Foundation, Free Software Foundation, etc. They all have broadly similar missions around producing software and supporting communities.
