More BBC iPlayer Encryption FOI Materials

I received some further materials from the BBC about the iPlayer encryption issues. My request is covered in the BBC’s Final Response. They released two documents: one I specifically asked for, entitled “Pan-BBC Approach to Combating Piracy”, and another entitled “Public/Press reaction to introduction of SWF Verification on iPlayer – Briefing Paper”.

They again denied my request for details on the mystery rights holders. However, I have since noticed that Alan Cox made a broadly similar request relating to the Freeview HD encryption, and his response lists the following organisations as having indicated interest to the BBC in the Freeview HD encryption proposal:

  • ITV
  • C4
  • S4C
  • Five
  • BBC Worldwide
  • Disney
  • Fox Entertainment
  • Sony Pictures
  • Time Warner

I’ve sent the following reply to the BBC:

Hi,

Thank you very much for this. I am glad to hear the BBC intends
to publish a blog entry relating to these issues soon. It is very
much a goal of mine in all this to seek to provoke useful, productive
dialogue.

I am very disappointed that you have chosen to deny my request
for information on which organisations are making encryption
requirements on the BBC. I note that you have supplied such
information in response to an extremely similar request that
covered the HD DVB encryption, at:

http://www.whatdotheyknow.com/request/encryption_proposals_to_ofcom

I do not quite understand how the DVB-T2 EPG encryption issue is
different from the iPlayer encryption/DRM issue. I would ask you to
review your decision in light of the above to ensure your decisions
are consistent.

I am also disappointed you were unable to supply further documents.
E.g. you must surely have recent documents covering the SSL/TLS
authentication encryption scheme for iPlayer, brought to public
attention recently with the launch of the iPad iPlayer. Such
documents seem fairly clearly to fall within the scope of my recent
requests, and yet somehow none of the documents I have received have
mentioned this scheme. I would ask you to review your response to
ensure you have not accidentally missed such documents – it seems
you must have.

I thank you again for your time in all this. I apologise again for
the burden, but I stress again that I feel there is a strong public interest in this.

regards,

Paul Jakma


Complaint to RBS Online about their browser whitelisting

As part of my progression to grumpy old githood, I present what may well be the first in an irregular series of letters of complaint to large, hard-of-hearing organisations. It might not change anything, but it makes me feel better.

Dear RBS,

This is at least the 3rd time in a number of years that I have had to email you to tell you that your policy of whitelisting browsers by their user-agent strings is somewhat less than conducive to both my financial interests AND my security. To remind ourselves of the history: I first encountered this problem some years ago, I emailed you again earlier this year, on the 10th of March, and I think I have suffered this issue on other, unreported occasions.



BBC’s Most Favoured Devices

Here’s a list of some of the BBC’s most favoured devices for internet TV (iTV?), for which the BBC will open the gate to its non-Flash, HTML5-based version of iPlayer:

  • Sony PS3
  • Sony Blu-ray Disc players
  • Cello TV
  • Apple iPad
  • (soon) Selected Windows CE devices

Notable devices/platforms not on this list:

  • Google Android
  • And all other HTML5 compliant devices

If you own such a device, then tough for you – even though the HTML5 iPlayer would work fine on it, were it not for the fact that the BBC have gone out of their way to bar those devices! It is particularly tough if your device does not support Flash (e.g. Android). Best of all, the BBC straight-facedly claim:

“But we don’t back any one technical horse. We care about making our services available as widely as possible: for our audiences.”

This is highly misleading, for this goal is clearly utterly subservient to the BBC’s greater goal of tightly controlling access to iPlayer, and of denying iPlayer as far as possible to platforms which the BBC feels it can not control – referred to generally as “authentication”, as we know from FOI requests. This has been the case ever since the BBC introduced an HTML+H.264 standards-based iPlayer for the Apple iPhone, and then went out of its way to implement technical measure after technical measure to block this interface to other, unapproved platforms.

As per a previous blog post, it is a very unhealthy situation for the BBC to be able to pick & choose the “winners” according to its whims.


Imagine if the BBC had stopped VHS

Recently I obtained, via an FoI request to the BBC, a document entitled “Content Protection Update”. This document details issues that were put to BBC management regarding content protection, as well as minutes from an ERTG meeting. The document is about protecting BBC content, and preserving the BBC’s ability to licence content from others. It contains this advice:

If we were to provide a version of iPlayer that was available to any device, then there would be nothing to stop an unapproved manufacturer creating a box that allowed these streams in perpetuity

Read that carefully. Given the heavy redactions, we unfortunately cannot place that quote in its full context. However, I think we can glean that the BBC believes we should not be able to record its content. The BBC believes that it must have the power to control which devices may, and which may not, access BBC iPlayer. As I’ve argued before, such power to approve devices is anti-competitive and easily abused.

The current BBC management effectively believe the UK licence payer should not be able to do what they have been doing since the early 80s: record TV for later viewing. The BBC would undo the effective rights the licence fee payer has had for 30 years, since the widespread availability of VHS. Without the ability to record TV arbitrarily, would people have invested in BBC-approved video tape technology? I would say not. In which case, the market in selling content on video tape likely would never have happened. Would the BBC ever have developed its lucrative commercial market in selling its back-catalogue on VHS?

The immense irony here is that today’s BBC management, in acting to maintain tight control over content, and hence over the devices that may play that content, are doing so in great part to protect the £100m-odd of profits generated by BBC Worldwide in 2009, £44m of which came from DVD sales – a market which grew out of consumer technology developments, i.e. home video tape, that could only have been funded by uncontrolled access to broadcast material.

Further, in acting to protect this £100m market, the BBC is making life difficult for the people who supply it with its £4.6bn of licence fee income. These people are not just those with such niche devices as Nokia or Android phones, who find they can not access BBC iPlayer. It is the entire fee-paying population which suffers, by being denied the wide range of innovative devices that might be available were it not for the BBC’s tight grip on which devices are and are not approved. Imagine if we could have cheap PVRs for BBC content, or digital TVs which could transparently show us programmes either from over-the-air broadcast or from on-demand iPlayer. None of this exists widely, because it all requires the BBC to inspect and sign off on each and every device.

Indeed, worse, the BBC at present are proposing that they should be the ones who develop the devices too! In so far as the future of TV is the internet, the BBC wishes to ensure that it has firm control over your TV!

In my opinion, the BBC should not be putting its interests in its commercial activities so far above the interests of the licence fee payer. Particularly not when the licence fee payer generates some 46 times more revenue for the BBC (£4.6bn against £100m). The BBC should not be acting as the gatekeeper to the TV market, whether broadcast or internet TV. I remain baffled as to why the BBC is acting in this way.


BBC Response to my iPlayer DRM FoI Request

Got my response from the BBC regarding who is forcing them to implement content protection, as their upper management has publicly stated, and how they came to their decisions on it. It’s pretty disappointing really.

They’ve refused my request to determine exactly who is requiring it of the BBC, on the grounds that this information is held “for the purposes of ‘journalism, art or literature’”. The 2nd part they refused generally on the grounds of it being too burdensome – my request was phrased poorly, it seems. They did release one document, though heavily redacted.

Feel free to read the “final response” and the released document, the ERTG “Content Protection Update”.

Suggestions on what, if anything, to do next would be useful.


Guest Blog: A Pilot’s View on Banner-Towing

Another guest blog from retired pilot Rudy Jakma, this time on the topic of banner towing – where light aircraft trail a banner behind them, to be read by people on the ground. Rudy has extensive experience of banner towing, particularly from the earlier parts of his career, with around 4,000 hours logged and around 1,000 banner pick-ups. His experience even pre-dates his flying career: as a 16-year-old he worked at a local aerodrome, helping to prepare banners to be picked up.

Last week, during the election campaign in the UK, a light aircraft crashed injuring the pilot and his passenger. It had been involved in a banner-towing flight on behalf of a political party, and the passenger, Nigel Farage, was an official of that party and standing as a candidate.

On Sky News, an “expert” proclaimed that towing a banner would have put the aircraft under abnormally high stress, increased by the weight of the passenger, and that this would have meant the flight (it sounded as if he meant the entire flight) was more hazardous due to the presence of the banner. Where do they get these experts from?

From the TV footage of the wreckage, I identify the aircraft as a Polish-built Wilga. This is a purpose-built utility aircraft, and towing objects, be they banners or gliders, is well within its structural capability.

All aircraft used for towing must be approved according to the manufacturer’s recommendations, and are fitted with quick-release tow hooks. The aviation authorities will follow manufacturers’ guidelines, and may impose further limitations, before they approve a certain aircraft type for towing operations. This ensures that towing an object will never allow stresses to be imposed that could compromise the structure in any way, be it through deformation, overstress or metal fatigue. The limitations will be published in the aircraft operating manuals and, if deemed necessary, also posted on placards in the cockpit.



Guest Blog: Pilot’s View on the Volcanic Air Travel Disruption

A topical guest blog entry from Rudy Jakma, a recently retired commercial pilot with 22,000 hours of experience on a wide range of aircraft, many of those hours on jets.

For four days, air travel all across Europe has been severely disrupted by clouds of ash from a volcanic eruption in Iceland. A stable area of high pressure steered the ash clouds first towards the British Isles; then they drifted slowly towards the north of continental Europe, where they linger. Continued volcanic activity is feeding the cloud, which so far has shown but little sign of dispersing.

The problem is that very little is known about the effects of volcanic ash on aircraft. Only a few encounters have resulted in recorded incidents, the best known being the case where a British Airways Boeing 747, commanded by Capt. Eric Moody, without warning flew into the ash cloud of an erupting volcano (Mount Galunggung, in Indonesia) and lost power on all 4 engines. It resulted in a celebrated triumph of excellent crew management and airmanship. Three engines were eventually restarted, and the aircraft landed safely, but it was severely damaged. It is my guess that many aircraft, before and since, may well have flown through the residue of a volcanic eruption without ever being aware of it.

The problem facing Civil Aviation Authorities and airlines is that there are little or no statistical or scientific data available on the effect of volcanic ash on aircraft – other than the few cases that resulted in incidents. These incidents demonstrated beyond a shadow of doubt that volcanic ash can pose a severe potential hazard to aviation. These are the worst-case scenarios. The dividing line between acute danger and a minor risk is anybody’s guess. This is the reason why the authorities see no choice but to err on the side of safety.

The majority of aircraft used by the airlines today are equipped with jet engines, which are at their best efficiency at higher altitudes – in general, above 30,000 feet. This is where, according to reports, the ash cloud lingers. They fly at speeds of around 500 mph, which turns the minuscule ash particles into sand blasting that can damage every part of the aircraft facing the airstream: wings, engines and windscreens.

Jet engines suck in enormous quantities of air, part of which goes through the combustion chambers while part is bypassed around the engine core, to give a large volume of gases leaving the exhaust at high speed to propel the aircraft. The compressor at the front of the engine has a number of blades that have a critical aerodynamic shape. Ash particles can damage the compressor blades, altering their shape and making the engine less efficient. Ash going through the combustion chambers may melt and form a deposit on the turbine blades. The result will at best be a heavy repair bill; at worst, a write-off of the engines, perhaps even of the entire aircraft.

Even though this will probably not result in a crash, safety regulations prescribe that an airliner MUST be able to go around if conditions prevent a landing. If engine power is compromised, a go-around may be impossible. Damaged turbine blades may even result in a so-called “compressor stall” if full power is selected, which in turn could lead to a “flame-out” of the engines. Losing all engine power at a critical moment is too great a risk to even contemplate.

Aviation has built an excellent safety record, much of it based on the principle of redundancy and alternatives. Knowingly flying into an area where abrasion damage could degrade aircraft performance to an unknown extent is unacceptable. This consideration in itself can make the departure of an airline flight with passengers illegal.

There are good reasons why authorities have decided to close a large – and expanding – portion of European airspace. However, the effect is already wreaking havoc with travellers. Passengers are stranded, and many others are faced with cancellation of their plans; many have booked and pre-paid holidays. Train and ferry services are filled to capacity and have long waiting lists, especially cross-Channel services and ferries across the Irish Sea.

The loss of revenue, combined with the cost of accommodating stranded passengers, could result in the bankruptcy of several airlines if this situation continues much longer. The survivors could well become the dominant players when – to use a very corny joke – the dust has settled. The landscape may well be very different after aviation resumes – whenever that may be.

Airlines already work on wafer-thin margins. Regular airlines will have to look after thousands of stranded passengers; they also employ a large number of ground staff. An event like this does not feature in the business model of any airline. Budget airlines in general do not look after their passengers in the same manner, and they also have comparatively less overhead.

It will be interesting to see who will come out the winners. I do not think it is coincidental that Ryanair were the first airline to cancel all flights, for a longer period than most other airlines: they immediately reduced their exposure to being obliged to look after their passengers. Having a much leaner payroll, it is my guess that Michael O’Leary made a calculated gamble and battened down the hatches early. The Irish government may have no choice but to seek EU approval for Ryanair to take over Aer Lingus – if there is anything left to take over!

There will be more victims, and survivors.


Letter to the BBC Trust regarding concerns with the BBC iPlayer

This is an edited version of a submission of mine to the BBC Trust, for its recent review of BBC on-demand services.

Overview

The BBC has, no doubt in good faith, acted to frustrate a number of legitimate 3rd party clients for iPlayer which were created outwith the BBC. I believe this is not in keeping with the Charter for the BBC, nor with its remit for BBC Online, with regard to its obligations to provide broad access to all UK residents.

Further, in building BBC iPlayer on the proprietary technology of a single vendor, I believe the BBC helps promote that technology and that single vendor. I believe this distorts the market by requiring that all iPlayer access devices must build on “Flash”. I believe the BBC is acting anti-competitively by restricting 3rd party entrepreneurs from having a choice of technology vendors. Further, through certain technical measures, the BBC has managed to make itself the arbiter of who may and who may not access the service.

I believe this situation is unhealthy and damaging to the public interest. I believe in fact there already are examples of the BBC having misused this power contrary to the public interest.

I would urge the BBC Trust to require the BBC to re-examine its iPlayer technologies and:

  • Investigate the use of multi-vendor, open standards, including HTML5 video
  • Engage with vendors and other interested parties in order to draw up, in an open and transparent manner, whatever further protocols are required (e.g. content rights protection protocols) to allow truly neutral access to BBC iPlayer
  • Eliminate the reliance on proprietary, single-vendor technology
  • Draw down its development of end user/device iPlayer clients, and transfer all possible responsibility for device development to the free market, as is right and proper, and normal for other services such as radio and TV (both analogue and digital)

Background

Flash

As you are aware, the BBC chose to implement its iPlayer catch-up/on-demand system on top of the “Flash” platform. This is a technology of Adobe Inc, a US corporation.

While some specifications for “Flash” have been published by Adobe, several key aspects of “Flash” remain undocumented and highly proprietary to Adobe – particularly the RTMPE and RTMPS protocols, which the BBC use in iPlayer. Further, there are no useful implementations of the current version of “Flash” technology, other than those offered by Adobe or licensed from Adobe, that I am aware of.

Adobe make end-user clients freely available for a number of popular operating systems and computers, such as Microsoft Windows on PC (32- and 64-bit), OS X on PPC/Intel, and Linux on 32-bit PC. Adobe also license versions of “Flash” to embedded-computer device manufacturers. It is unclear to me whether Adobe require royalties on such players; however, from experience, Adobe do charge an initial porting fee.

Competing video content delivery technologies include:

  • HTML5 video
    • Multi-vendor industry standard, openly specified by the W3C
    • Implemented in more recent versions of Microsoft Internet Explorer (IE9), Google Chrome and Firefox
    • A number of online video sites have beta tests of HTML5 video, including YouTube
    • NB: Firefox bundles only Ogg Theora codec support at this time
  • Microsoft Silverlight
    • Openly specified, with the usual patent issues around video
    • Second-source implementation available from the Mono project
    • Has had some high-profile deployments

3rd Party iPlayer Clients

Since the iPlayer service started, a number of 3rd party clients have been created. These included a plugin for “XBMC”, a software solution to turn a PC into a “media centre”, and “get_iplayer”, a command-line tool allowing Unix/Linux users to access iPlayer.

Both these clients were respectful of the BBC’s content protection policies and restrictions, and were careful to honour them. These clients were also subject to the BBC’s “geographic IP checks” (geo-IP), which allowed the BBC to ensure users of these tools were in the UK (to the same extent as it could ensure this for the official BBC iPlayer). These tools did not rely on Adobe technology, and their source is freely licenced. Some users perceived a number of technical advantages, such as better integration with their respective environments, lower CPU usage, etc.

At several stages, the BBC made technical changes to the service which appeared to have no purpose other than to frustrate unofficial, 3rd party clients. E.g. the BBC started requiring certain HTTP User-Agent strings.

For quite a while though, the BBC made no further changes, and the clients concerned worked very well for their users – until very recently, when the BBC implemented a protocol (“SWF verification”) which requires the client to “prove” that it is the official BBC iPlayer client, by providing a numeric value calculated from the binary data making up the BBC iPlayer client and from data provided by the server. This value can obviously not easily be provided by 3rd party clients – unless they first download the official iPlayer SWF file from the BBC, of course.
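
To illustrate the general shape of such a challenge-response scheme: the server in effect demands a keyed digest over the official client’s binary. This is a sketch only, under assumed names – the actual RTMPE “SWF verification” algorithm is not documented here, and HMAC-SHA256 is simply a plausible stand-in for whatever digest is really used.

    import hashlib
    import hmac

    def swf_verification_response(swf_bytes: bytes, server_challenge: bytes) -> bytes:
        """Sketch: a digest over the client binary, keyed by server-supplied
        data. Only a party holding the exact SWF bytes can compute it."""
        return hmac.new(server_challenge, swf_bytes, hashlib.sha256).digest()

Note that a check of this shape proves only possession of the SWF bytes, not that the official client is actually running them – which is why merely downloading the official SWF file defeats it.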

These measures taken by the BBC objectively do not add any immediate end-user value to the service. These measures objectively do interfere with the ability of 3rd party clients, being used legitimately by UK licence fee payers, to access the service. These measures have indeed reduced the base of clients able to access iPlayer, with the developers of get_iplayer having decided to abandon further work due to the repeated obstructive measures taken by the BBC.

Concerns

Concern 1: Public Value and Emerging Communications

In the licence for BBC Online, the Trust asked the following of the BBC with regard to emerging communications, in part II, 5.6:

BBC Online should contribute to the promotion of this purpose in a variety of ways including making its output available across a wide range of IP-enabled platforms and devices.

Additionally, the BBC’s general Charter Agreement states, in clause 12(1), that the BBC “must do what is reasonably practicable to ensure that viewers … and other users … are able to access UK Public Services”.

My concern is that the BBC had made available a service, which was being used by UK licence fee payers in full compliance with all usage restrictions, and it then took technical steps aimed directly at blocking access to the service. To me it is most contrary to the BBC’s remit and its Charter Agreement to act to block legitimate access!

I believe this is one example of an abuse, though well-intentioned and in good faith, of the power the BBC has acquired by gaining control over the end-user platform (i.e. by restricting access to BBC iPlayer to the BBC’s Flash-based player, and to a select few approved device-specific clients, such as that for the Apple iPhone).

Further, Adobe Flash is a roadblock to wide access. Whereas multi-vendor standards tend to be widely available across devices, Adobe Flash is not. This can be because certain embedded-system vendors can not afford the moneys required to get Adobe’s attention to port Flash to new platforms (from experience, it’s a not insubstantial amount even for large multi-nationals, and can be an impossibly huge amount for small, low-budget startups).

In essence, the single-vendor nature of Flash is a significant obstacle to the goal of wide deployment over a range of platforms. With all the best will in the world, it is not within the resources of Adobe alone to support all the different kinds of things people would like to play video on, at a price suited to small startups.

Similarly, it is not within the resources of the BBC to properly engage with every aspiring builder of iPlayer access devices.

Concern 2: Market Impact

The BBC Online licence asks the BBC to consider the market impact, in part I, 1:

BBC Online should, at all times, balance the potential for creating public value against the risk of negative market impact.

I will set out my stall and say that I believe that in many spheres of human activity the public interest is best served through free market activity, in a reasonably unrestricted form. While I cherish the BBC for its content, and I agree with its public service remit, I do not believe the public interest is served by having the BBC dictate the end-user platform. Rather, I believe software and device vendors should be free to create and develop products, and the public to pick which is best.

The BBC have unfortunately managed to acquire the power to dictate whose devices and software do and do not have access to iPlayer. It has managed to do so under the guise of “content protection”. Under this pretext, the BBC has persuaded the BBC Trust that, unlike at any point in prior modern history, the BBC alone must provide or approve all the iPlayer-viewing code that runs on end-user devices. The BBC Trust has gone along with this on a temporary basis.

This gives the BBC an unprecedented power, one it has not had in any other area of end-user device technology in living memory.

As detailed above, the BBC has already wielded this power to cut off clients of which it does not approve – though these clients and their users were acting as legitimately as any other. The BBC claims this is a necessary requirement. However, there is no reason why the BBC can not, in conjunction with the internet community and interested vendors, prescribe a standard way for clients to implement the BBC’s content protection: one that would afford the BBC all the same protections as it has at present, both technical and legal (such as under the Copyright & Related… Act 2003), while allowing 3rd party access on a neutral basis.

There is no need for the BBC to be the king-maker of the device market, and the BBC should not have this power. The BBC should not be allowed to build a new empire over end-user devices. It should be scaling down its iPlayer client development – not up! It should focus on back-end delivery technologies, and provide a standards-specified interface to those technologies, just as it does for over-the-air broadcast with DVB-T and DVB-T2, and through its participation in setting those standards.

With regard to Adobe: even if the BBC did not make use of Flash to restrict 3rd party iPlayer clients, the reliance on a single vendor is a problem. Adobe may have a reasonably good history of being open and providing royalty-free players for most popular platforms. However, Adobe are a for-profit US corporation, and so they are in no way beholden to the UK public interest. They would be perfectly entitled to start charging royalties in the future if they wished. They have charged device manufacturers royalties in the past in certain scenarios (exactly which scenarios is not public, to the best of my knowledge).

Further, Adobe are but a single software company. As stated previously, they do not have the resources to work with every little TV/STB startup. As such, they apply basic economics and raise their fees for enabling Flash on new embedded platforms (porting fees) to the point that demand diminishes to a manageable level. This means small startups may be excluded from enabling iPlayer on their devices.

Finally, the internet has been developing multi-vendor standards for video, as part of HTML5. It is reasonably widely acknowledged that multi-vendor standards are more favourable to the public interest than single-vendor proprietary technologies, all other things being equal.

As such, the BBC ought to have a duty to favour the use of multi-vendor, openly specified standards – to advance the public good.

Conclusion

My conclusions are as given in my introduction. I believe the BBC has, albeit in good faith, misused the control it has gained over end-user platforms with iPlayer by deliberately interfering with usage of iPlayer by legitimate clients.

I believe the BBC, no doubt in good faith and with all good intention, has made a mistake in choosing to build its iPlayer video delivery systems on the proprietary technologies of a single vendor. I believe that these acts and decisions are not in keeping with the Charter, nor with its remit for BBC Online or iPlayer. I believe the public interest demands that the BBC should step back from its current heavy involvement in the intimate mechanics of how end-user devices access iPlayer.

The BBC instead should publish technical documents describing the protocols used to access its services. It should engage all interested parties to formulate and develop any required new protocols, in an open and transparent manner. This is similar to how the BBC acts in other areas, such as with DVB.

I think this is required in order to allow the free market to be able to innovate and so create a range of exciting and useful iPlayer clients.

I think it is vitally important to all this that the BBC use openly specified and published protocols, available on non-discriminatory terms that are implementable by royalty-free software. Such software often forms the infrastructure for future innovation.


Network Activity Visualisations

I’m tinkering on a network protocol simulator at the moment. For debug purposes, it can provide some basic visualisation of what’s going on, e.g. highlighting links which are transmitting messages. These visualisations can sometimes be mesmerising, I find. To avoid turning the front page of this blog into a blinking mess, I’ve put them on their own page.


Making the Internet Scale Through NAT

(just want to dump this here for future reference)

It’s widely acknowledged that the internet has a scaling problem ahead of it, and an even more imminent addressing problem. The core of internet routing is completely exposed to peripheral growth in mobile/multihomed leaf networks. Various groups are looking at solutions. The IRTF’s RRG has been discussing various solutions. The IETF have a LISP WG developing one particular solution, which is quite advanced.

The essential, common idea is to split an internet address into 2 distinct parts, the “locator” and the “ID” of the actual host. The core routing fabric of the internet then needs only to be concerned with routing to “locator” addresses. Figuring out how to map onward to the (presumably far more numerous and/or less aggregatable) “ID” of the specific end-host to deliver to is then a question that need only concern a boundary layer between the core of the internet and the end-hosts. There are a whole bunch of details here (including the thorny question of what exactly the “ID” is, in terms of scope and semantics) which we’ll try to skip over as much as possible, for fear of getting bogged down in them.

We will note that some proposals, in order to be as transparent and invisible to end-host transport protocols as possible, use a “map and encap” approach – effectively tunnelling packets over the core of the internet. Some other proposals use a NAT-like approach, like Six-One or GSE, with routers translating from inside to outside. Most proposals come with reasonable amounts of state, and some appear to have quite complex architectures and/or control planes. E.g. LISP in particular seems quite complex, relative to the “dumb” internet architecture we’re used to, as it seems to try to solve every possible IP multi-homing and mobile-IP problem known to man. Some proposals also rely on IPv6 deployment.

Somewhat related proposals are Shim6, which adds a multi-homing-capable “shim” layer in between IP and transport protocols like TCP (or “Upper Layer Protocols” / ULPs), and MultiPath-TCP (MPTCP), which aims to add multi-pathing extensions to TCP. Shim6 adds a small, additional state machine alongside whatever state machines the ULPs require, and is not backwards compatible with non-shimmed hosts (which isn’t a great problem of itself). Shim6 requires IPv6. MPTCP is still at a very early stage, and does not appear to have produced any concrete proposals yet.

Not too infrequently, well-engineered solutions to some problem may not quite have a high enough unilateral benefit/cost ratio for that solution to enjoy widespread adoption. Then cheap, quick hacks that address the immediate problem, without causing too many problems, can win out. The clear example in the IP world being NAT. It is therefore interesting to consider what will happen if none of the solutions currently being engineered, mentioned above, gain traction.

E.g. there are quite reasonable solutions which are IPv6-specific, and IPv6 is not exactly setting the world alight. There are other proposals which are v4/v6 agnostic, but require a significant amount of bilateral deployment to be useful, e.g. between ISPs and customers, and do not have any obvious immediate advantages for most parties. Etc. So if, for whatever reasons, the cost/benefit ratio means none of these solutions are rolled out, and the internet stays effectively v4-only for long enough, then the following will happen:

  1. Use of NAT will become ever more common, as pressure on IPv4 addresses increases
  2. As the use of NAT increases, the pressure on the transport layer port space (now press-ganged into service as an additional cross-host flow identifier) will increase, causing noticeable problems to end-user applications.
  3. As NAT flow-ID resources become ever more precious, so applications will become more careful to multiplex application protocols over available connections.
  4. However, with increased internet growth, even NAT will become more and more of a luxury, and so ever more applications will be forced to rely on application-layer proxies, to allow greater concentration of applications onto public IP connections – at which stage you no longer have an internet.

The primary problems in this scenario are:

  • The transport protocol’s port number space has been repurposed as a cross-host flow-ID
  • That port number space is far too constraining for this purpose, as it’s just 2 bytes – at most 65,536 distinct flow IDs per transport protocol behind a given pair of public addresses

The quickest hack fix is to extend the transport IDs. With TCP this is relatively easy to do, by defining a new TCP option. E.g. say we add a 2 × 4-byte host-ID option, one ID for the src, one for the dst. When replying to a packet that carried the option, you would set the dst to the received src, obviously. This would give NAT concentrators an additional number space to help associate packets with flows and the right NAT mappings.
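
Purely as an illustration of the layout (the option format here is this post’s own invention; kind 253 is merely one of the TCP option kinds reserved for experimental use):

    import struct

    HOST_ID_KIND = 253  # experimental TCP option kind, assumed for this sketch
    HOST_ID_LEN = 10    # kind (1) + length (1) + src ID (4) + dst ID (4)

    def pack_host_id_option(src_id: int, dst_id: int) -> bytes:
        """Encode the hypothetical host-ID option: kind, length, src, dst."""
        return struct.pack("!BBII", HOST_ID_KIND, HOST_ID_LEN, src_id, dst_id)

    def reply_host_id_option(received: bytes) -> bytes:
        """Build the option for a reply: the received src ID becomes the dst ID."""
        _kind, _length, src_id, dst_id = struct.unpack("!BBII", received)
        return pack_host_id_option(src_id=dst_id, dst_id=src_id)

A NAT concentrator could then key its mappings on (addresses, ports, host IDs), rather than on the cramped 2-byte port space alone.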

This only fixes things for TCP; plus, the space in the TCP header for options is becoming quite crowded, and it’s not always possible to fit another 2×4+2 bytes in (properly aligned). IP also has an option mechanism, so we could define the option for IP and it would work for all IP protocols. However, low-level router designers yelp in disgust at such suggestions, as they dislike having to parse out variably placed fields in HDL; indeed it is common for high-speed hardware routers to punt packets with IP options out to a slow, software data-path. The remaining possibility is a shim-style layer, i.e. in between IP and the ULPs, but that has zero chance of graceful fallback compatibility with existing transports.

Let’s go with the IP option header as the least-worst choice. It might lead to slower connectivity, but then again, if your choice is between “no connectivity, because a NAT box in the middle is maxed out” and “slow connectivity, but connectivity still”, then it’s better than nothing. Further, if such use of an IP option became widespread, you can bet the hardware designers would eventually hold their noses and figure out a way to make common-case forwarding fast even in its presence – only NAT concentrators would need to care about it.

Ok, so where are we now? We’ve got an IP option that adds 2 secondary host IDs to the IP header, thus allowing NAT concentrators to work better. Further, NAT concentrators could now even be stateless when it comes to processing packets that have this option: all a concentrator has to do is copy the dst address from the option into the IP header dst. This would allow the NATed hosts to be reachable from the outside world! The IDs don’t even have to be 4 bytes each, per se.
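
Again a sketch only, with made-up names – a real implementation would live in a router’s forwarding path, not in Python:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: int      # IP header source address
        dst: int      # IP header destination address
        opt_src: int  # option: next-scope source ID
        opt_dst: int  # option: next-scope destination ID

    def concentrator_forward(pkt: Packet) -> Packet:
        """Stateless rewrite at the scope boundary: the packet arrived
        addressed to the concentrator (pkt.dst); deliver it onward to the
        inner-scope host named by the option's dst ID. No per-flow state."""
        pkt.dst = pkt.opt_dst
        return pkt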

Essentially, you have now split addressing into 2. You could think of it as a split in terms of an “internet” or “global network ID” and an “end-host” ID, a bit like Six-One or other older proposals I can’t remember the name of now; however, it’s better to think of it as 2 different addressing “scopes”:

  • The current forwarding scope, i.e. the IP header src/dst
  • The next forwarding scope, i.e. the option src/dst, when the dst differs from the current scope dst

NAT boxes now become forwarding-scope context change points. End-host addressing in a sense can be thought of as end-host@concentrator. Note that the common, core scope of the internet can be blissfully unbothered by any details of the end-host number space. Indeed, if there’s no need for cross-scope communication then they can even use different addressing technologies, e.g. one IPv4 the other IPv6 (with some modification to the “copy address” scheme above).

Note that the end-host IDs no longer need be unique, e.g. 10.0.0.1@concentrator1 can be a different host from 10.0.0.1@concentrator2. In an IPv4 world, obviously, the end-host IDs (or outside/non-core scope IDs) would not be globally unique. However, this scheme has benefits even with globally unique addresses, e.g. public-IP@concentrator still is beneficial because it would allow the core internet to avoid carrying the routes for stub/leaf public IP prefixes.
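
Continuing the sketch above, the outer scope routes only on the concentrator’s address; the inner ID is disambiguated purely by which concentrator a packet was sent to (the constants here are, of course, arbitrary):

    CONCENTRATOR1 = 0x01020304  # hypothetical public locator addresses
    CONCENTRATOR2 = 0x05060708
    INNER_ID = 0x0A000001       # "10.0.0.1" as an inner-scope host ID

    # Same inner ID behind two different concentrators denotes two
    # distinct hosts: the packets diverge in the outer scope (dst),
    # before each concentrator rewrites dst to the inner ID.
    a = concentrator_forward(Packet(src=0, dst=CONCENTRATOR1, opt_src=0, opt_dst=INNER_ID))
    b = concentrator_forward(Packet(src=0, dst=CONCENTRATOR2, opt_src=0, opt_dst=INNER_ID))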

You could take it a bit further and allow for prepending of scopes in this option, so you have a stack of scopes – like the label stack in MPLS – if desired. This would allow a multi-layered onion of forwarding: end-host@inner@outer. Probably not needed though.

What do we have now:

  • An IP option that adds a secondary addressing scope to a packet
  • A 2-layer system of forwarding, extendible to n-layer
  • Decoupling of internet ID space from end-host ID space
  • Compatible with existing ULPs
  • Backwards compatible with IPv4 for at least some use cases
  • Allows NAT to scale
  • Relatively minor extension to existing, widely accepted and deployed NAT architecture
  • Allows scalable use of PI addressing
  • No per-connection signalling required
  • Minimal state

On the last point, the system clearly is not dynamic, as Shim6 tries to be. However, if the problem being solved is provider independence in the sense of being able to change providers without having to renumber internally, then this hack is adequate. Further, even if the reachability of network@concentrator is not advertised to the internet, the reachability of concentrator must be advertised at least. So there is some potential for further layering of reachability mechanisms onto this scheme.

No doubt most readers are thinking “You’re clearly on crack”. Which is what I’d have thought if I’d read the above a year or 5 ago. However, there seems to be a high level of inertia on today’s internet towards solving the addressing crunch through NAT. There seems to be little incentive to deploy IPv6. IPv6 also doesn’t solve multi-homing or routing scalability, the solutions for which all have complexity and/or compatibility issues to varying degrees. Therefore I think it is at least worth considering what happens if the internet sticks with IPv4 and NAT even as the IPv4 addressing crunch bites.

As I show above, it is at least plausible to think there may be schemes that are compatible with IPv4 and allow the internet to scale up, and which can likely be extended later to allow general connectivity across the NAT/scope boundaries, still with a level of backwards compatibility. Further, this scheme does not require any dynamic state or signalling protocols, beyond initial administrative setup. However, the scheme does allow for future augmentation to add dynamic behaviours.

In short, schemes can be devised which, while not solving every problem immediately, can deliver incremental benefits, by solving just a few pressing problems first and being extendible to the remaining problems over time. It’s at least worth thinking about them.

[Corrections for typos, nits and outright crack-headedness would be appreciated, as well as any comments]

