SRAM’s wireless bicycle gear shifting: Protocol analysis

It’s pretty rare that I get to blog about both cycling and networking. Hard to see how those two topics share any common ground. That’s about to change as SRAM, the American bicycle component maker, appear to have a wireless gear shifting system in advanced testing.

The system was seen at the Tour of California earlier this year, though disguised with decoy wires to make it seem like a wired, electronic system. It was also spotted recently at the USA Pro Cycling Challenge, now without the decoy wires.

You may ask why on earth bicycles would need radio-controlled gear shifting, which is a good question. One benefit seems to be that the wireless shifting weighs slightly less than either a conventional, mechanically actuated system or even a wired electrical one. However, that benefit will surely be marginal. The lack of cabling may make installation and maintenance easier too. Perhaps. Another possibility is that SRAM were behind Shimano and Campagnolo in delivering an electronically controlled shifting system, and decided that going wireless would help differentiate their product from the other two. Who knows!

As someone who specialises in network protocols, I’m curious to know how it might work in terms of the messages sent, logical layers of the radio protocol, and how the different components communicate to co-ordinate shifting.

There are obvious questions: Is it secure? Are there any limitations? Will it be reliable? The brief answers are: yes, it should be secure; yes, there seem to be some limitations compared to traditional shifting systems; and it should probably be quite reliable, though when it isn’t the behaviour could be quite strange.

The rest of this blog will go into the details of how this system works at a network protocol level, as gleaned from SRAM’s patent on electronic shifting. It will look at the ramifications of the design decisions made, and at how they could affect the operation of the system in extreme circumstances where messages are lost due to, e.g., radio noise.

The quick summary:

The SRAM system, at least as described in their patent, has a number of limitations that are a consequence of the network protocol:

  • It cannot support more than two front chain rings
  • It cannot support intentional, simultaneous shifting of the front and rear dérailleurs

While radio noise is unlikely to be a significant problem outside of deliberate interference or some unusual locations, should the system be affected by sustained noise, the effects may manifest as:

  • Missed rear shifts, followed by an overshift of the rear
  • An overshift of the rear, after an apparently normal front shift
  • A combined, unintentional shift of both dérailleurs, on an intended front shift

The system should, though, be relatively robust against interference, due to its use of a low-bitrate network layer. It should also be secure, due to its use of strong encryption, so that only those with physical access to the bicycle (e.g. the rider) can control the gear system.

The system that goes into production may differ from the system described in SRAM’s patent, and some or all of these limitations may not apply to it. We won’t know till it is released.

Read the rest of this entry »

Comments (9)

Graph distance using matrix multiplication in Octave

If A is a matrix of the non-negative edge weights between nodes in a graph (from 0 to infinity, with infinity meaning no edge), then repeatedly multiplying that matrix by itself in the {ℝ∪{∞}, min, +} semi-ring will converge on the shortest-path distances in that graph. For reference (my own mostly), this blog entry gives the GNU Octave code for it and an example.
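As a rough sketch of the idea (a minimal version, not the full code from the entry, and assuming A is square, with zeros on the diagonal and Inf where there is no edge):

    % Minimal sketch: all-pairs shortest paths by repeatedly squaring A in the
    % (min,+) semi-ring.  Assumes A(i,i) = 0 and A(i,j) = Inf for missing edges.
    function D = minplus_square (A)
      n = rows (A);
      D = inf (n);
      for k = 1:n
        % Best path i -> j via intermediate node k; broadcasting gives an n-by-n update.
        D = min (D, A(:,k) + A(k,:));
      endfor
    endfunction

    function D = graph_distances (A)
      D = A;
      % ceil(log2(n)) squarings suffice, as shortest paths use fewer than n edges.
      for i = 1:ceil (log2 (rows (A)))
        D = minplus_square (D);
      endfor
    endfunction

For example, graph_distances ([0 1 Inf; 1 0 2; Inf 2 0]) gives a distance of 3 between nodes 1 and 3, via node 2.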

Read the rest of this entry »

Leave a Comment

Transport Scotland and its attitude to active transport spending

Transport Scotland had planned to spend £27m on low-carbon, sustainable and active transport, of which cycling is but a part, over 3 years (I presume roughly from 2013 to 2015?). That makes the annual budget £9m.

Now, if you were about to hand out £424k of that £9m – just about 5% of the entire annual budget for low-carbon, sustainable, and active transport – you’d do your homework on it, wouldn’t you? You might want a detailed proposal, with goals and metrics, perhaps? You’d want to see some detailed proposals for what the campaign might cover, no? You’d possibly need some back and forth to give feedback and work out the details, which’d generate minutes and emails, right? Surely?

Not if you’re Transport Scotland. No. Transport Scotland, it seems, will hand out £424k – again, that’s pretty much 5% of the entire annual active transport budget – based on nothing more than barely two pages of a proposal. A proposal with the scantiest of details, one which couldn’t have taken more than 30 minutes to write up. That’s all it takes to get £424k from Transport Scotland, at least in terms of anything that leaves a record, apparently.

This is Transport Scotland’s official stance, made in response to my Freedom of Information request about the commissioning of the Nice Way Code, which they re-iterated to the Scottish Information Commissioner, after I appealed on grounds of incredulity.

I think this stinks of a cavalier, uncaring, and dismissive attitude both to public money and to active transport policy in Scotland. I would expect Transport Scotland to be a lot more careful with 5% of their budget for active transport.

I think that’s fairly scandalous.

Leave a Comment

The surprising centres of the Internet

A previous post, on “Barabási – Albert preferential attachment and the Internet”, gave a plot of the Internet as a sparsity map of its regular adjacency matrix, with the axes ordered by each AS’s eigencentrality:

Sparsity plot of the Internet adjacency matrix, for the UCLA IRL 2013-06 data-set, with the nodes ordered by their eigencentrality ranking.

Each connection in the BGP AS graph is represented as a dot, connecting the AS on one axis to the AS on the other. As the BGP AS graph is undirected, the plot ends up symmetric. The top-right corner of the plot shows that the most highly-ranked ASes are very densely interconnected. The distinct outline is probably indicative (characteristic, even?) of a tree-like hierarchy in the data.
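Producing such a plot is straightforward in Octave. Roughly (assuming A already holds the symmetric, 0/1 AS adjacency matrix, e.g. as built from the UCLA IRL data):

    A = sparse (A);                     % the AS graph is large but sparse
    [v, lambda] = eigs (A, 1);          % principal eigenvector = eigencentrality
    [ignored, order] = sort (abs (v));  % least central first, most central last
    spy (A(order, order));              % one dot per adjacency, axes in rank order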

Who are these top-ranked ASes though? Are they large, well-known telecommunications companies? The answer might be surprising.

Read the rest of this entry »

Leave a Comment

Code and error handling strategies and FSMs

We’ve had a discussion in the office, on and off over a few days, about error handling in programmes. Neither my office mate nor I are completely happy with what’s available out there. The state of the art in imperative/OO languages appears to be stuck at exceptions. The new Go programming language eschews them, essentially going back to returning an error value from each call – which then has to be checked for.

The essential problem in programming is, of course, that code may follow a number of paths. Some of those paths are, at some level, the most common or desired paths (often, just one path). Other paths may be taken, e.g. because a resource was unavailable or because the requested action was inappropriate or unreasonable in some way. Of those other paths, some may still be reasonably commonly taken (e.g. a file not existing) and/or recoverable in some way. Other paths may be exceptional in some way, such that the programme is unlikely to recover from having taken them. These paths converge at function/procedure call nodes, where they have to be dispatched to direct programme flow onward to the appropriate further path.

My own sense is that path-selection syntax/support in programming languages tends to divide all error/return handling into one of three cases:

  1. The primary path(s)
  2. Recoverable error paths
  3. Unrecoverable error paths

What, then, are the ways to handle these cases? In languages such as C, errors generally must be handled after each function call, acting on the return value or a modified parameter. There are of course other ways, such as signals and exceptions.

There is also another way, which I see much less often, but which has certain benefits: using Finite State Machines (FSMs) to give objects an idempotent error state. This allows the calling code to bunch operations together without having to worry about errors, as the error handling can be done later, separately.
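To give a flavour of the idea, here is a deliberately simple sketch (a struct stands in for an object, the "FSM" has just two states, ok and error, and the names are purely illustrative; any imperative language would do much the same):

    1;  % (script-file marker, so Octave accepts the function definitions below)

    % Once an object enters its error state it stays there, and every further
    % operation is a no-op; the caller can check for errors just once, at the end.
    function obj = thing_new ()
      obj = struct ("state", "ok", "total", 0);
    endfunction

    function obj = thing_add (obj, x)
      if (strcmp (obj.state, "error"))
        return;                       % error state is sticky: ignore the call
      endif
      if (! isnumeric (x) || x < 0)   % stand-in for a real failure condition
        obj.state = "error";
        return;
      endif
      obj.total += x;
    endfunction

    % Calling code: operations are bunched together, with no per-call checks.
    t = thing_new ();
    t = thing_add (t, 3);
    t = thing_add (t, -1);            % fails; the object latches into "error"
    t = thing_add (t, 5);             % silently ignored
    if (strcmp (t.state, "error"))
      error ("thing: an earlier operation failed");
    endif

This is much the same shape as the error indicator on a C stdio stream, which ferror() lets you check after a whole run of writes.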

This blog tries to go over these methods, and their benefits and pitfalls. I’d also be really interested to hear about other error-handling strategies, especially better ones! :)

Read the rest of this entry »

Comments (3)

Barabási – Albert preferential attachment and the Internet

The Barabási – Albert paper “Emergence of Scaling in Random Networks” helped popularise the preferential-attachment model of graphs, and its relevance to a number of real-world graphs. There’s a small, somewhat trivial tweak that can be made to that model which nevertheless changes its characteristics slightly, with the result possibly being more relevant to the BGP AS graph. GNU Octave and Java implementations are given for the original BA model and the tweak. The tweak can also potentially improve the performance of vectorised implementations, such as in Octave/Matlab.
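For a flavour of the original process (a naïve sketch only, not the tweaked version, with the function name just for illustration):

    % Barabási – Albert sketch: each new node attaches m edges to existing nodes,
    % chosen with probability proportional to their degree.  Sampling uniformly
    % from the flat list of edge endpoints gives exactly that bias.
    function A = ba_graph (n, m)
      A = zeros (n);
      endpoints = [];
      % Seed: connect the first m+1 nodes in a ring.
      for i = 1:m+1
        j = mod (i, m+1) + 1;
        A(i,j) = 1;  A(j,i) = 1;
        endpoints = [endpoints, i, j];
      endfor
      % Growth: new nodes preferentially attach to high-degree nodes.
      for v = m+2:n
        targets = unique (endpoints(randi (numel (endpoints), 1, m)));
        % (sampled with replacement, for brevity; the original model picks
        %  m distinct targets)
        A(v, targets) = 1;  A(targets, v) = 1;
        endpoints = [endpoints, targets, repmat(v, 1, numel (targets))];
      endfor
    endfunction

Something like A = ba_graph (2000, 3); hist (sum (A), 50) then shows the characteristic heavy-tailed degree distribution.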

Read the rest of this entry »

Comments (1)

Are IPv6 addresses too small?

IPv4 addresses are running out. IPv6 seems to offer the promise of a vast, effectively limitless address space. However, is that true? Could it be that even IPv6 addresses are too limited in size to allow potential future routing problems to be tackled? Maybe, if we want scalable, efficient routing. How could that be though?

IPv4 addresses are all but exhausted: IANA ran out in 2011, APNIC shortly thereafter, and RIPE in 2012. The idea is that we switch to IPv6, overcoming the obstacles to transition that have now been engineered into it (e.g. an address space that is usually completely unrelated to IPv4’s, which significantly hampers IPv6 from making transparent use of existing IPv4 forwarding capabilities).

IPv6 addresses are 128 bits in length, compared to the 32 bits of IPv4. So in theory IPv6 provides a massive address space compared to IPv4: 2^128 addresses. Exactly how massive? Well, some have calculated that it is enough to assign an IPv6 address to every atom on the surface of the earth, and still have addresses to spare. Others have calculated that if every IPv6 address were assigned to a node with just 512 bytes of memory, the system in total would require more energy than it would take to boil all the oceans on earth – presuming that system exists only as pure energy! So by those estimates, the IPv6 address space seems so massive that we couldn’t possibly ever come close to using all of it, at least not while we’re confined to earth.

Those estimates, however, assume the addresses are used as pure numbers, with perfect efficiency. In reality this will not be the case. To make computer networking efficient we need to encode information into the structure of the addresses, particularly as those networks become large. The lower, least-significant 64 bits of an IPv6 address are effectively reserved for use on local networks, to allow auto-configuration of IPv6 addresses on ethernets; changing this would be quite difficult. So IPv6 actually offers only 64 bits to distinguish between different networks, and a further 64 bits to distinguish between hosts on a given network. Further, of the upper, most-significant 64 bits (i.e. the “network prefix” portion of an IPv6 address), at least the first, most-significant 3 bits are already taken up to distinguish different blocks of IPv6 space. This means the global networking portion of the IPv6 address space has only about 61 bits available to it.

IPv4 and IPv6 BGP FIB sizes: # of prefixes on a logarithmic scale. Courtesy of Geoff Huston.

Why could this be a problem? Isn’t 61 bits more than enough to distinguish between networks globally (i.e., as on the Internet)? Well, that’s an interesting question. The global Internet routing tables have continued to grow at super-linear pace, as can be seen in the plot above (courtesy of Geoff Huston), with at least quadratic growth and possibly exponential modes of growth. IPv4 routing tables have continued to grow despite the exhaustion of the address-space, by means of de-aggregation. IPv6 routing tables are also growing fast, though from a much smaller base. There have been concerns over the years that this growth might one day become a problem, that router memory might not be able to cope with it, and so that could bottleneck Internet growth.

At present, with BGP, every distinct network publishes its prefix globally, and effectively every other network must keep a note of it. Hence, in terms of entries, the Internet routing tables grow in direct proportion to the number of distinct networks. As each entry uses an amount of memory roughly logarithmic in the size of the network, routing table memory grows slightly faster than the network itself. So as the Internet grows at a rapid pace, the amount of memory needed at each router grows even more rapidly.

If we wanted to tackle this routing table growth (and it’s not clear we need to), we would have to re-organise addressing and routing slightly. Schemes where routing table memory grows more slowly than the network are possible. Hierarchical routing schemes existed before, and BGP even had support for hierarchical routing through CIDR and aggregation. However, as hierarchical routing can lead to very inefficient routing (packets taking very roundabout paths compared to the shortest path), other than in carefully coördinated networks, it never gained traction for Internet routing, and support for aggregation has since been deprecated from BGP. Other schemes involving tunnelling and encapsulation are possible, but they suffer from similar, intrinsic inefficiencies.

A more promising scheme is Cowen Landmark Routing. This scheme guarantees both sub-linear growth in routing table sizes, relative to the growth of the network, and efficient routing. In this scheme, networks are associated with landmarks, and packet addresses contain the address of the landmark as well as that of the destination node. One problem with turning this into a practical routing system for the Internet is the addressing: how do you fit two network node identifiers into an IPv6 address? Currently, organisations running networks are identified by AS numbers, which are now 32-bit. However, 2 × 32 bits = 64 bits, and there are no more than 61 bits available at present in IPv6 for inter-networking.
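To put numbers on that (a back-of-the-envelope check only, with the 3-bit figure being the block prefix discussed above):

    % Does a (landmark AS, destination AS) pair fit into the portion of an
    % IPv6 address left over for global routing?
    interface_id_bits = 64;   % lower half, reserved for the local network
    block_prefix_bits = 3;    % e.g. the 2000::/3 global unicast block
    available = 128 - interface_id_bits - block_prefix_bits;
    needed    = 32 + 32;      % two 32-bit AS numbers
    printf ("need %d bits, have %d: short by %d\n", needed, available, needed - available);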

Generally, any routing scheme that seeks to address routing table growth is near certain to want to impose some kind of structure on the addressing in similar ways. The 64 bits IPv6 leaves for distinguishing networks don’t give much room to manoeuvre. Longer addresses, or perhaps better, flexible-length addresses, might have served us better.

Are IPv6 addresses too short? Quite possibly, if we ever wish to try to tackle routing table growth while retaining efficient routing.

Comments (4)

Older Posts »