I've mentioned a bunch of times on the time-nuts list that I'm quite fond of the Spectracom 8140 system for frequency distribution. For those not familiar with it, it's simply running a 10MHz signal against a 12v DC power feed so that line-powered pods can tap off the reference frequency and use it as an input to either a buffer (10MHz output pods), decimation logic (1MHz, 100kHz etc.), or a full synthesizer (Versa-pods).
It was only in October last year that I got a house frequency standard going, using an old Efratom FRK-LN which now provides the reference. I'd use a GPSDO, but I live in a ground-floor apartment without a usable sky view, which of course also makes it hard to test some of the GPS projects I'm doing. Despite living in a tiny apartment I have test equipment in two main places, so the 8140 is a great way to lock all of it to the house standard.
(The rubidium is in the chunky aluminium chassis underneath the 8140)
Another benefit of the 8140 is that many modern pieces of equipment (such as my [HP/Agilent/]Keysight oscilloscope) have a single connector for reference frequency in/out. Should the external frequency ever go away, the instrument switches back to its internal reference, but also sends that back out the connector, which could lead to other devices sharing the same signal switching to it. The easy way to avoid that is to give each such device a dedicated port on a distribution amplifier, which works well enough until you have this situation in multiple locations.
As previously mentioned, the 8140 system uses pods to add outputs. While these pods are still available quite cheaply used on eBay (as of this writing for as low as US$8, though ~US$25/pod has been common for a while), the cost of shipping to Australia has recently gone up to the point where I started planning to make my own.
By making my own pods I also get to add features the original pods didn't have. I started with a quad-output pod with optional internal line termination, which gives me feeds for multiple devices with the annoying behaviour I mentioned earlier. The enclosure is a Pomona model 4656, with the board designed to slot in and offer pads for the BNC pins to solder to, for easy assembly.
This pod uses a Linear Technology (now Analog Devices) LTC6957 buffer for the input stage, replacing the discrete transistor & logic gate input stage of the original devices. The most notable change is that this stage works reliably down to -30dBm input (possibly lower, but I couldn't test beyond that), whereas the original pods stop working right around -20dBm.
As it turns out, although it can handle lower input signal levels, in other ways (including power usage) it's very similar. One notable downside is that the chip tops out at 4v absolute maximum input, so a separate regulator is used just to feed this chip. The main regulator has also been changed from a 7805 to an LD1117 variant.
On this version the output stage is the same TI 74S140 dual 4-input NAND
gate as was used on the original pods, just in SOIC form factor.
As with the next board, there is one error on this one: the wire loop that forms the ground connection was intended to fit a U-type pin header, however the footprint I used was just too tight to allow the pins through, so I've used some thin bus wire instead.
The second major variant I designed was a combo version, allowing sine & square outputs by just moving a jumper, or isolated or line-regenerator (8040TA from Spectracom) versions with a simple sub-board containing just an inductor (TA) or a 1:1 transformer (isolated).
This is the second revision of that board, where the 74S140 has been replaced by a modern TI 74LVC1G17 buffer. This version of the pod, set for sine output, draws almost exactly 30mA (since both the old & new pods use linear supplies, that's the most sensible unit), whereas the original pods are right around 33mA. The empty pads at the bottom-left are simply placeholders for two 100 ohm resistors to add 50 ohm line termination if desired.
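As a quick sanity check on that termination (my own arithmetic, nothing from the Spectracom docs), two 100 ohm resistors in parallel give the required 50 ohms:

```python
def parallel(*resistors):
    """Equivalent resistance (ohms) of resistors wired in parallel."""
    return 1 / sum(1 / r for r in resistors)

# Two 100 ohm resistors across the line give a standard 50 ohm termination.
print(parallel(100, 100))  # -> 50.0
```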
The board fits into the Pomona 2390
"Size A" enclosures, or for the isolated version the Pomona 3239
"Size B". This is the reason the BNC connectors have to be extended to reach the board; on the isolated boxes the BNC pins reach much deeper into the enclosure.
With the jumpers removed, plus the smaller buffer, it should be easy to fit a pod into the Pomona "Miniature" boxes.
I was also due to create some new personal business cards, so I arranged the circuit down to a single layer (the only jumper needed is to connect the ground pins of both connectors) and merged it with some text converted to KiCad footprints to make a nice card on some 0.6mm PCBs. The paper in that photo is covering the link to the build instructions, which weren't written at the time (they're *mostly* done now, I may update this post with the link later).
Finally, while I was out travelling at the start of April my new (to me) HP 4395A arrived so I've finally got some spectrum output. The output is very similar between the original and my version, with the major notable difference being that my version is 10dB worse at the third harmonic. I lack the equipment (and understanding) to properly measure phase noise, but if anyone in AU/NZ wants to volunteer their time & equipment for an afternoon I'd love an excuse for a field trip.
Spectrum with input sourced from my house rubidium (natively a 5MHz unit) via my 8140 line. Note that despite saying "ExtRef" the analyzer is synced to its internal 10811 (which is an optional unit, and uses an external jumper, hence the display note).
Spectrum with input sourced from the analyzer's own 10811, and power from the DC bias generator also from the analyzer.
1: Or at least I didn't think they had, I've since found out that there was a multi output pod, and one is currently in the post heading to me.
2: An option on the standard Spectracom pods, albeit a rare one.
Several days ago, inspired in part by an old work mail thread being resurrected I sent this image as a tweet captioned "The state of BGP transport security."
For those not familiar with it, the image is a riff on this image about NoSQL databases
This triggered a bunch of discussion, with multiple people saying "so how would *you* do it", and we'll get to that (or for the TL;DR skip to the bottom), but first some background.
The tweet is a reference to the BGP protocol the Internet uses to exchange routing data between (and within) networks. This protocol (unless tunnelled inside a separate container) is never encrypted, and in practice can only be authenticated by a TCP option known as TCP-MD5 (standardised in RFC2385); the BGP protocol itself has no native encryption or authentication support. Since routing data can often be inferred from the packets going across a link anyway, this has led to fixing it not being a priority.
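For those curious what TCP-MD5 actually computes, RFC2385 defines the option value as an MD5 over the TCP pseudo-header, the TCP header (checksum zeroed, options excluded), the segment data, and finally the key. A simplified sketch of that construction (field handling abbreviated, not a full implementation):

```python
import hashlib
import socket
import struct

def tcp_md5_digest(src_ip, dst_ip, src_port, dst_port,
                   seq, ack, flags, window, payload, key):
    """Sketch of the RFC2385 digest: MD5 over the TCP pseudo-header,
    the TCP header (checksum zeroed, options excluded), data, then key."""
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_TCP, 20 + len(payload))
    # Bare 20-byte TCP header: data offset 5 words, checksum & urgent zeroed.
    tcp_hdr = struct.pack("!HHIIBBHHH", src_port, dst_port, seq, ack,
                          5 << 4, flags, window, 0, 0)
    return hashlib.md5(pseudo + tcp_hdr + payload + key).digest()

digest = tcp_md5_digest("192.0.2.1", "192.0.2.2", 179, 50000,
                        1000, 2000, 0x18, 65535, b"bgp data", b"secret")
print(len(digest))  # the 16-byte result is carried in TCP option kind 19
```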
Transport authentication & encryption is a distinct issue from validation of the routing data transported by BGP, an area already being worked on by the various RPKI projects; eventually transport authentication may be able to benefit from some of the work done by those groups.
TCP-MD5 is quite limited. While generally supported by all major BGP implementations, it has one major limitation that makes it particularly annoying: it takes a single key, making key migration difficult (and in many otherwise sensible topologies, impossible without impact). Being a TCP option is also a pain, increasing fragility.
At the time of its introduction TCP-MD5 gave two main benefits. The first was some basic authentication beyond the base protocol (for which the closest element is the validation of peer-as in the OPEN message, where a mismatch will helpfully tell you who the far side was looking for). The second was making it harder to interfere with the TCP session, which on many TCP implementations of the day was easier than it should have been. Time, however, has marched on: protection against session interference from a non-MITM attacker is no longer needed, and the major silent-MITM case of Internet Exchanges using hubs is long obsolete. Worse, in part due to the pain associated with changing keys, many networks have a "default" key they will use when turning up a peering session; for major networks these keys are often so well known that they've been shared on public mailing lists, eliminating what little security benefit TCP-MD5 still brings.
This has been known to be a problem for many years, and the proposed replacement, TCP-AO (The TCP Authentication Option), was standardised in 2010 as RFC5925. However, to the best of my knowledge, eight years later no mainstream BGP implementation supports it. As it too is a TCP option, not only does it still have many of the downsides of MD5, but major OS kernels are much less likely to implement new options (indeed, an OS TCP maintainer commenting as much on the thread I mentioned above is what kicked off my thinking).
TCP, the wire format, is in practice unchangeable. This is one of the major reasons for QUIC, the TCP replacement protocol soon to be standardised (and over which HTTP/3 will run), so for universal deployment any proposal that requires changes to TCP is a non-starter.
Any solution must be implementable & deployable.
- Implementable - BGP implementations must be able to implement it, and do so correctly, ideally with a minimum of effort.
- Deployable - Networks need to be able to deploy it, and when authentication issues occur the error messages should be no worse than with standard BGP (this is an area most TCP-MD5 implementations fail at; of those I've used, JunOS is a notable exception, and Linux required kernel changes for it to even be *possible* to debug).
Ideally any security-critical code should already exist in a standardised form, with multiple widely-used implementations.
Fortunately for us, that exists in the form of TLS. IPSEC, while it exists, fails the deployability test, as almost anyone who's ever had the misfortune of trying to get IPSEC working between different organisations using different implementations can attest; sure, it can usually be made to work, but nowhere near as easily as TLS.
Discussions about the use of TLS for this purpose have happened before, but always quickly run into the problem of how certificates for this should be managed, and that is still an open problem (the work on RPKI may eventually provide a solution here). Until that time we have a workable alternative in the form of TLS-PSK (first standardised in RFC4279), a set of TLS modes that allow the use of pre-shared keys instead of certificates (for those wondering, not only does this still exist in TLS 1.3, it's in a better form). For a variety of reasons, not least the lack of real-time clocks in many routers that may not be able to reach an NTP server until BGP is established, PSK modes are still more deployable than certificate verification today. One key benefit of TLS-PSK is that it supports multiple keys, allowing migration to a new key in a significantly less impactful manner.
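To make the migration point concrete: in TLS-PSK the client sends a PSK identity and the server picks the matching key, so during a key roll both old and new keys can simply coexist in the server's table. A toy sketch (identities and key values are mine, not from any real implementation):

```python
# Server-side key table during a migration window: both keys stay valid,
# and the PSK identity presented by the client selects which one is used.
psk_keys = {
    b"peer-as64500-2018": bytes.fromhex("aa" * 32),  # old key
    b"peer-as64500-2019": bytes.fromhex("bb" * 32),  # new key
}

def select_psk(identity):
    """Server callback: return the key for this identity, or None to reject."""
    return psk_keys.get(identity)

# Old peers keep working while new peers present the new identity;
# once every peer has migrated, the old entry is simply deleted.
assert select_psk(b"peer-as64500-2018") == bytes.fromhex("aa" * 32)
assert select_psk(b"peer-as64500-2020") is None
```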
The most obvious way to support BGP-in-TLS would simply be to listen on a new port (as is done for HTTPS, for example). However there are a few reasons why I don't think such a method is deployable for BGP, primarily the need to update control-plane ACLs, a task that in large networks is often distant from the folk working with BGP, and in small networks may not be understood by any current staff (a situation not dissimilar to the state of TCP). Another option would be protocol multiplexing: do a TLS negotiation if a TLS hello is received, or unencrypted BGP for a BGP OPEN. That would violate the general "principle of least astonishment", and would be harder for operators to debug.
Instead I propose a design similar to that used by SMTP (where it is known as STARTTLS): during early protocol negotiation, support for TLS is signalled using a zero-length capability in the BGP OPEN, the endpoints do a TLS negotiation, and the base protocol then continues inside the new TLS tunnel. Since this negotiation happens during the BGP OPEN, other data included in the OPEN leaks: primarily the ASN, but also the various other capabilities supported by the implementation (which could identify it). I suggest that if TLS is required, the information in the initial OPEN not be validated; a standard reserved ASN should be sent instead, any capabilities not strictly required should be omitted, and a fresh OPEN containing all the normal information sent inside the TLS session.
Migration from TCP-MD5 is a key point, however not one I can find any good answer for. Some implementations already allow TCP-MD5 to be optional, which would allow an easy upgrade, however such support is rare, and unlikely to become more widespread.
On that topic, allowing TLS to be optional in a consistent manner is particularly helpful, and something that I believe SHOULD be supported to allow cases like currently unauthenticated public peering sessions to be migrated to TLS with minimal impact. Allowing this does open the possibility of a downgrade attack, and makes attacks causing state-machine confusion (an implementation believing it's talking on a TLS-secured session when it isn't) more plausible.
What do we lose relative to TCP-MD5? Some performance; whilst this is not likely to be an issue in most situations, it is likely not an option for anyone still trying to run IPv4 full tables on a Cisco Catalyst 6500 with a Sup720. We also lose the TCP manipulation prevention aspects, however these days those are (IMO) of minimal value in practice. There's also the cost of implementations needing to include a TLS implementation, and whilst nearly every system will have one (at the very least for SSH) it may not already be linked into the routing protocol implementation.
Lastly, my apologies to anyone who has proposed this before, but neither I nor my early reviewers were aware of such a proposal. Should such a proposal already exist and meet the goals of being implementable & deployable, it may be sensible to pick that up instead.
The IETF is often said to work on "rough consensus and running code", for this proposal here's what I believe a minimal *actual* demonstration of consensus with code would be:
- Two BGP implementations, not derived from the same source.
- Using two TLS implementations, not derived from the same source.
- Running on two kernels (at the very least, Linux & FreeBSD)
The TL;DR version:
- Implementations advertise their support for TLS using a zero-length capability in the BGP OPEN message
- TLS version MUST be at least 1.3
- If TLS is required, the AS field in the OPEN MAY be set to a well-known value to prevent information leakage, and other capabilities MAY be removed, however implementations MUST NOT require the TLS capability be the first, last or only capability in the OPEN
- If TLS is optional (which MUST NOT be the default behaviour), the OPEN MUST (other than the capability) be the same as for a session configured for no encryption
- After the TCP client receives confirmation of TLS support from the TCP server's OPEN message, a TLS handshake begins
- To make this deployable TLS-PSK MUST be supported, although exact configuration is TBD.
- Authentication-only variants of TLS (e.g. RFC4785) REALLY SHOULD NOT be supported.
- Standard certificate-based verification MAY be supported; if supported, client certificates MUST be used, with both sides validated. However, how roots of trust would work for this has not been investigated.
- Once the TLS handshake completes, the BGP state machine starts over with the client sending a new OPEN
- Signalling the TLS capability in this OPEN is invalid and MUST be rejected
- (From here, everything is unchanged from normal BGP)
Magic numbers for development:
- Capability: (to be referred to as EXPERIMENTAL-STARTTLS) 219
- ASN (for avoiding data leaks in OPEN messages): 123456
- Yes, this means also sending the 4-byte ASN capability. Every implementation that might possibly implement this already supports 4-byte ASNs.
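Putting those magic numbers together, the advertisement is just one more capability in the OPEN's optional parameters. Here's a sketch of the encoding with RFC4271 framing (my experimental values; since the placeholder ASN doesn't fit in the 2-byte AS field, AS_TRANS goes there and the real value rides in the 4-byte ASN capability):

```python
import struct

AS_TRANS = 23456        # RFC6793 placeholder for the 2-byte AS field
CAP_FOUR_OCTET_AS = 65  # RFC6793 four-octet AS number capability
CAP_STARTTLS = 219      # EXPERIMENTAL-STARTTLS (my experimental value)
ANON_ASN = 123456       # placeholder ASN to avoid leaking the real one

def bgp_open(hold_time=90, bgp_id=b"\xc0\x00\x02\x01"):
    """Build a minimal BGP OPEN advertising the STARTTLS capability."""
    caps = (
        struct.pack("!BB", CAP_STARTTLS, 0)                  # zero-length cap
        + struct.pack("!BBI", CAP_FOUR_OCTET_AS, 4, ANON_ASN)
    )
    opt_params = struct.pack("!BB", 2, len(caps)) + caps     # param type 2 = capabilities
    body = struct.pack("!BHH4sB", 4, AS_TRANS, hold_time,
                       bgp_id, len(opt_params)) + opt_params
    # 16-byte all-ones marker, 2-byte total length, type 1 = OPEN
    header = b"\xff" * 16 + struct.pack("!HB", 19 + len(body), 1)
    return header + body

msg = bgp_open()
print(len(msg))
```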
The key words "MUST (BUT WE KNOW YOU WON'T)", "SHOULD CONSIDER", "REALLY SHOULD NOT", "OUGHT TO", "WOULD PROBABLY", "MAY WISH TO", "COULD", "POSSIBLE", and "MIGHT" in this document are to be interpreted as described in RFC 6919.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC2119.
As part of my previously mentioned side project, the ability to replace crystal oscillators in a circuit with a higher quality frequency reference is really handy, letting me eliminate a bunch of uncertainty from some test setups.
A simple function generator is the classic way to handle this, although if you need square wave output it quickly gets hard to find options, with arbitrary waveform generators (essentially just DACs) being the common choice. If you can get away with just sine wave output, an RF synthesizer is the other main option.
While researching these I discovered the CG635 Clock Generator from Stanford Research
, and some time later picked one of these up used.
As well as being a nice square wave generator at arbitrary voltages these also have another set of outputs on the rear of the unit on an 8p8c (RJ45) connector, in both RS422 (for lower frequencies) and LVDS (full range) formats, as well as some power rails to allow a variety of less common output formats.
All I needed was 1.8v LVCMOS output. I could get that from the front panel output, but I'd then need a coax tail on my boards, and could potentially run into voltage rail issues, so I wanted to use the pod output instead. Unfortunately none of the pods available from Stanford Research do LVCMOS output, so I had to make my own, which I did.
The key chip in my custom pod is the TI SN65LVDS4
, a 1.8v capable single channel LVDS receiver that operates at the frequencies I need. The only downside is that this chip is only available in a single form factor, a 1.5mm x 2mm 10-pin UQFN, which is far too small to hand solder with an iron. The rest of the circuit is just some LED indicators to show status.
Here's a rendering of the board from KiCad.
Normally "not hand solderable" has meant getting the board assembled for me, however my normal assembly house doesn't offer custom PCB finishes, and I wanted these to have white solder mask with black silkscreen as a nice UX for when I go to use them. So instead I decided to try my hand at skillet reflow, a nice option given the space I've got in my tiny apartment (the classic tutorial on this from SparkFun is a good read if you're interested). Instead of just a simple cooking plate, you can now buy hot plates with what are essentially soldering iron temperature controllers, sold as pre-heaters, making it easier to come close to a normal soldering profile.
Sadly, actually acquiring the hot plate turned into a bit of a mess, the first one I ordered in May never turned up, and it wasn't until mid-July that one arrived from a different supplier.
Because of the aforementioned lack of space, instead of using stencils I simply hand-applied (leaded) paste, without even an assist tool (which I probably will acquire for next time), then hand-mounted the components and dropped the boards on the plate to reflow. I had one resistor turn 90 degrees, and a few bridges from excessive paste, but for a first attempt I was really happy.
Here's a photo of the first two just after being taken off the hot plate.
Once the reflow was complete it was time to start testing, and this was where I ran into my biggest problems.
The two big problems were with the power supply I was using, and with my oscilloscope.
The power supply (a Keithley 228 Voltage/Current source) is from the 80s (Keithley's "BROWN" era), and while it has nice specs, it doesn't have the most obvious UI. Somehow I'd set it to limit at 0mA output current, and if you're not looking at the segment lights it's easy to miss. At work I have an EEZ H24005, which also resets the current limit to zero on clear, however it's much more obvious when limiting, and a power supply with that level of UX is now on my "to buy" list.
The issues with my scope were much simpler. Currently I only have an old Rigol DS1052E, and while it works fine it is a bit of a pain to use, but ultimately I made a very simple mistake while testing. I was feeding in a trigger signal directly from the CG635's front outputs, and couldn't figure out why the generator was putting out such a high voltage (implausibly so). To cut the story short, I'd simply forgotten that the scope was set for 10x probes, and once I realised that, everything made rather more sense. An oscilloscope with auto-detection of 10x probes, as well as a bunch of other features I want in a scope (a much bigger screen for one), has now been ordered, but won't arrive for a while yet.
Ultimately the boards work fine. Until the new scope arrives I can't determine their signal quality, but at least they're ready for when I'll need them, which is great for flow.
For the next part of my ongoing project I needed to test the GPS receiver I'm using, a uBlox LEA-M8F (M8 series chip, LEA form factor, with frequency outputs). Since the native 30.72MHz oscillator is useless for me, I'm using an external TCVCXO (temperature compensated, voltage controlled crystal oscillator) for now, with the DAC & reference needed to discipline the oscillator against GPS. If uBlox would sell me the frequency version of the chip on its own that would be ideal, but they don't sell to small customers.
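Disciplining the TCVCXO is conceptually just a control loop: measure the error against GPS, and nudge the DAC until the error goes to zero. A toy PI-controller sketch (the gains, units, and the 1-unit-per-Hz tuning slope are all made up for illustration; this is not the real firmware):

```python
# Toy model: the DAC word pulls the oscillator frequency, and a PI
# controller steers the GPS-measured frequency error toward zero.

def discipline(true_offset_hz, kp=0.4, ki=0.1, steps=200):
    """Return the residual frequency error after the loop settles."""
    dac = 0.0         # DAC tuning word (arbitrary units, 1 unit = 1 Hz pull)
    integrator = 0.0  # accumulated error, gives zero steady-state error
    for _ in range(steps):
        error = true_offset_hz - dac   # error as measured against GPS
        integrator += error
        dac = kp * error + ki * integrator
    return true_offset_hz - dac

residual = discipline(true_offset_hz=5.0)
print(abs(residual) < 0.01)
```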
Here's a (rather modified) board sitting on top of an Efratom FRK rubidium standard
that I'm going to mount to make a (temporary) home standard (that deserves a post of its own). To give a sense of scale the silver connector at the top of the board is a micro-USB socket.
Although a very simple board I had a mess of problems once again, both in construction and in component selection.
Unlike the PoE board from the previous post, I didn't have this board manufactured. This was for two main reasons: first, the uBlox module isn't available from Digikey, so I'd still need to mount it by hand; second, to fit all the components this board has a much greater area, and since the assembly house I use charges by board area (regardless of the number or density of components) it would have cost several hundred dollars. In the end, that might actually have been the sensible way to go.
By chance I'd picked up a new soldering iron at the same time these boards arrived, a Hakko FX-951 knock-off and gave it a try. Whilst probably an improvement over my old Hakko FX-888 it's not a great iron, especially with the knife tip it came with, and certainly nowhere near as nice to use as the JBC CD-B (I think that's the model) we have in the office lab. It is good enough that I'm probably going to buy a genuine Hakko FM-203 with an FM-2032 precision tool for the second port.
The big problem I had hand-soldering the boards was bridges on several of the components. Not just the tiny (0.65mm pitch, actually the *second largest* of eight packages for that chip) SC70 footprint of the PPS buffer, but also the much more generous 1.1mm pitch of the uBlox module. Luckily solder wick fixed most cases, plus one where I pulled the buffer and soldered a new one more carefully.
With components, once again I made several errors:
- I ended up buying the wrong USB connectors for the footprint I chose (the same thing happened with the first run of USB-C modules I did in 2016), and while I could bodge them into use easily enough, there wasn't enough mechanical retention, so I ended up ripping one connector off the board. I ordered some correct ones, but because I wasn't able to wick all the solder off the pads they don't attach as strongly as they should, and whilst less fragile, are hardly what I'd call solid.
- The surface-mount GPS antenna (Taoglas AP.10H.01, visible in this tweet) I used has 11dB higher gain than the antenna I'd tested with on the devkit. I never managed to get it to lock while connected to the board, although once on a cable it did work OK. In the end I removed the antenna and bodged on an SMA connector to allow easier testing.
- When selecting the buffer I accidentally chose one with an open-drain output when I'd meant to use a push-pull one. It took me an embarrassingly long time to realise the mistake. Compounding this, the buffer is on the 1PPS line, which only strobes while locked to GPS; my apartment is a concrete box where what GPS signal I can get is only available in my bedroom, and my oscilloscope is in my lab, so I couldn't observe the issue live and had to inject test signals. Luckily a push-pull version is available in the same footprint, and a quick hot-air-aided swap (once parts arrived from Digikey) fixed it.
- Yes I can solder down to ~0.5mm pitch, but not reliably.
- Add more test points on dev boards, particularly all voltage rails, and notable signals not otherwise exposed.
- Flux is magic, you probably aren't using enough.
Although I've confirmed all the basic functions of the board work, including GPS locking, PPS (quick video of the PPS signal LED), and frequency output, I've still not tested the native serial ports or the frequency stability from the oscillator. Living in an urban canyon makes such testing a pain.
Eventually I might also test moving the oscillator, DAC & reference into a mini oven to see if a custom OCXO would be any better, if small & well insulated enough the power cost of an oven shouldn't be a problem.
Also, as you'll see if you look at the tweets, I really should have posted this almost a month ago, however I finished fixing the board just before heading off to California for a work trip, and whilst I meant to write this post during the trip, it's not until I've been back for more than a week that I've gotten to it. I find it extremely easy to let myself be distracted from side projects, particularly since I'm in a busy period at $WORK at the moment.
For my next big project I'm planning on making it run using Power over Ethernet. Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second-stage low-noise linear regulator (ST LDL1117S33R) to further drop that to 3.3v, although as it turns out the main chip I'm using does its own 5v-to-3.3v conversion already.
Because I was lazy, and the pricing was reasonable, I got these boards manufactured by pcb.ng, who I'd used for the USB-C termination boards I did a while back.
Here's the board running a Raspberry Pi 3B+, as it turns out I got lucky and my board is set up for the same input as the 3B+ supplies.
One really big warning: this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using one of the isolated converters available with integrated PoE termination.
For this series I'm going to also try to make some notes on the mistakes I've made with these boards to help others. For this board:
- I failed to add any test pins; given this was the first try I really should have. Being able to inject power just before the switching converter was helpful while debugging, but I had to solder wires to the input cap to do that.
- Similarly, I should have had a 5v output pin; for now I've just been shorting the two diodes I had near the output, which were intended to let me switch input power between two feeds.
- The last, and the only actual problem with the circuit, was that when selecting which exact parts to use I optimised by choosing the same diode for both input protection & switching. This was a mistake: the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode required. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. With the diode swapped for a correctly rated one it now supplies 10W just fine.
- While debugging the previous issue I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with KiCad's normal reduction of paste in those large pads, plus my manufacturer's fairly thin application of paste, all contributed to this. Next time I'll probably avoid via-in-pad.
Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow I not only failed to order 10k resistors, I didn't already have any in stock).
Today's evil project was inspired in part by a suggestion after my talk on USB-C & USB-PD at this year's linux.conf.au Open Hardware miniconf.
Using a knock-off Hakko driver and handpiece I've created what may be the first USB powered soldering iron that doesn't suck (ok, it's not a great iron, but at least it has sufficient power to be usable).
Building this was actually trivial: I just wired the 20v output of one of my USB-C ThinkPad boards to a generic Hakko driver board. The loss of power from using 20v rather than 24v is noticeable, but for small work this would be fine (I solder in either the work lab or my home lab, both of which have very nice soldering stations, so I don't actually expect to ever use this).
If you were to turn this into a real product you could do much better by handling both power negotiation and temperature control in a single micro. The driver's bare FET could be replaced with a boost converter, controlling the power draw via the output voltage, with the regulator simply disabled to turn off the heater. By chance, the heater resistance of the Hakko 907 clone handpieces is such that, combined with the USB-PD power rules, you'd always be boost converting, never needing to reduce voltage.
With such a driver you could run this from anything, starting with a 5v USB-C phone charger or battery (15W for the nicer ones), through 9v at up to 3A off some laptops (for ~27W), all the way to 20v@5A for those who need an extremely high-power iron. 60W, which happens to be the standard power level of many good irons (such as the Hakko FX-888D), is also, at 20v@3A, a common limit for chargers (and also many cables; only fixed cables, or those specially marked with an ID chip, can go all the way to 5A). As higher-power USB-C batteries start becoming available for laptops this becomes a real option for on-the-go use.
Here's a photo of it running from a Chromebook Pixel charger:
As with many massive time-sucking rabbit holes in my life, this one starts with one of my silly ideas getting egged on by some of my colleagues in London (who know full well who they are), but for a nice change, this is something I can talk about.
I have a rather excessive number of laptops, at the moment my three main ones are a rather ancient Lenovo T430 (personal), a Lenovo X1 Gen4, and a Chromebook Pixel 2 (both work).
At the start of last year I had a T430s in place of the X1, and was planning on replacing both it and my personal ThinkPad mid-year. However both of those older laptops used Lenovo's long-held (back to the IBM days) barrel charger, which led to me having a heap of them in various locations at home and work. All the newer machines switched to the newer rectangular "slim" style power connector, and while adapters exist, I decided to go in a different direction.
One of the less-touted features of USB-C is USB-PD, which allows devices to be fed up to 100W of power, and can do so while using the port for data (or the other great feature of USB-C, alternate modes, such as DisplayPort, great for docks), which is starting to be used as a way to charge laptops, such as the Chromebook Pixel 2, various models of the Apple MacBook line, and more.
Instead of buying a heap of slim-style Lenovo chargers, or a load of adapters (which would inevitably disappear over time) I decided to bridge towards the future by making an adapter to allow me to charge slim-type ThinkPads (at least the smaller ones, not the portable workstations which demand 120W or more).
After doing some research on what USB-PD platforms were available at the time I settled on the TI TPS65986 chip, which, with only an external flash chip, would do all that I needed. Devkits were ordered to experiment with and prove the concept, which they did very quickly, so I started on building the circuit, since just reusing the devkit boards would lead to an adapter larger than would be sensible. As the TI chip is a many-pin BGA, and breaking it out on two layers would probably be too hard for my meager PCB design skills, I needed a 4-layer board, so I decided to use KiCad for the project.
It took me about a week of evenings to get the schematic fully sorted, with much of the time spent reading the chip datasheet, or digging through the devkit schematic to see what they did there for some cases that weren't clear, then almost a month for the actual PCB layout, with much of the time being sucked up learning a tool that was brand new to me, and also fairly obtuse.
By mid-June I had a PCB which should (but, spoiler, wouldn't) work. However, as mentioned, the TI chip is a 96-ball 3x3mm BGA, something I had no hope of manually placing for reflow, and of course no hope of hand soldering, so I would need to get these manufactured commercially. Luckily there are several options for small-scale assembly at very reasonable prices, and I decided to try a new company still (at the time of ordering) in closed trials, PCB.NG. They have a nice simple procedure to upload board files, plus a slightly custom pick & place file that includes references to the exact components I want by Digikey part number. Best of all, the pricing was completely reasonable, with a first test run of six boards costing me only US$30 each.
Late in June I received a mail from PCB.NG telling me that they'd built my boards, but that I had made a mistake with the footprint I'd used for the USB-C connector, and they were posting my boards along with the connectors. As I'd had them ship the order to California (at the time they didn't seem to offer international shipping) it took a while for the boards to arrive in Sydney, courtesy of a coworker.
I tried to modify a connector by removing all the through-hole board locks, keeping just the surface-mount pins, but was unsuccessful, and that's where the project stalled until mid-October, when I was in California myself and was able to get help from a coworker who can perform miracles of surface-mount soldering (while they were working on my board they were also dead-bug mounting a BGA). Sadly, while I now had a board I could test, it simply dropped off my priority list for months.
At the start of January another of my colleagues (a US-based teammate of the London rabble-rousers) asked for a status update, which prompted me to get off my butt and perform the testing. The next day I added some reinforcement to the connector, which was only really held on by the surface-mount pins and highly likely to rip off the board: I covered it in epoxy. Then I knocked up some USB A plug/socket to bare-wire test adapters using some stuff from the junk bin we have at the office maker space for just this sort of occasion (the socket was actually a front panel USB port from an old IBM x-series server). With some trepidation I plugged the board into my newly built & tested adapter, and powered the board from a lab supply set to limit current in case I had any shorts on the board. It all came up straight away, and even lit the LEDs I'd added for some user feedback.
Next was to load a firmware for the chip. I'd previously used TI's tool to create a firmware image, and after some messing around with the SPI flash programmer I'd purchased I managed to get the board programmed. However, the behaviour of the board didn't change with (what I thought was) real firmware. I used an oscilloscope to verify the flash was being read, and a Twinkie
to sniff the PD negotiation, which confirmed that no request for 20v was being sent. This was where I finished that day.
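For anyone curious what the sniffer is looking at: the source advertises its fixed-voltage profiles as 32-bit PDOs in a Source_Capabilities message, and the sink is expected to request one of them. A minimal sketch of decoding a fixed-supply PDO per the USB-PD spec follows; the capability list here is made up for illustration, not what my charger actually sent:

```python
def decode_fixed_pdo(pdo: int):
    """Decode a USB-PD Fixed Supply PDO (type bits 31:30 == 0b00).
    Voltage is encoded in 50 mV units (bits 19:10), maximum current
    in 10 mA units (bits 9:0)."""
    if (pdo >> 30) & 0b11 != 0:
        raise ValueError("not a fixed-supply PDO")
    volts = ((pdo >> 10) & 0x3FF) * 0.05
    amps = (pdo & 0x3FF) * 0.01
    return volts, amps

# A hypothetical 60 W source advertising 5/9/15/20 V at 3 A each.
for mv, ma in [(5000, 3000), (9000, 3000), (15000, 3000), (20000, 3000)]:
    pdo = ((mv // 50) << 10) | (ma // 10)
    print(decode_fixed_pdo(pdo))
```

A correctly configured sink would spot the 20v entry and send a Request for it; my board, with its bad firmware, never did.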
Over the weekend that followed I dug into what I'd seen and determined that either I'd killed the SPI MISO port (the programmer I used was 5v, not 3.3v), or I just had bad firmware and the chip had some good defaults. I created a new firmware image from scratch, and loaded that.
Sure enough it worked first try. Once I confirmed 20v was coming from the output ports I attached it to my recently acquired HP 6051A DC load where it happily sank 45W for a while, then I attached the cable part of a Lenovo barrel to slim adapter and plugged it into my X1 where it started charging right away.
Last week I gave (part of) a hardware miniconf talk about USB-C & USB-PD, which open source hardware folk might be interested in. Over the last few days, while visiting my dad down in Gippsland, I made the edits to fix the footprint and sent a new rev to the manufacturer for some new experiments.
Of course, at CES Lenovo announced that this year's ThinkPads would feature USB-C ports and allow charging through them, and since due to laziness I never got around to replacing my T430, I'm planning to order a T470 as soon as they're available, making my adapter obsolete.
- April 21st 2016, decide to start working on the project
- April 28th, devkits arrive
- May 8th, schematic largely complete, work starts on PCB layout
- June 14th, order sent to CM
- ~July 6th, CM ships order to me (to California, then hand carried to me by a coworker)
- early August, boards arrive from California
- Somewhere here I try, and fail, to reflow a modified connector onto a board
- October 13th, California coworker helps to (successfully) reflow a USB-C connector onto a board for testing
- January 6th 2017, finally got around to reinforcing the connector with epoxy and started testing; tried loading firmware but no dice
- January 10th, redo firmware, it works, test on DC load, then modify a ThinkPad-slim adapter and test on a real ThinkPad
- January 25/26th, fixed USB-C connector footprint, made one more minor tweak, sent order for rev2 to CM, then some back & forth over some tolerance issues they're now stricter on.
1: There's a previous variant of USB-PD that works on the older A/B connector, but, as far as I'm aware, was never implemented in any notable products.
One thing people might be surprised about my job is that although I'm in network operations I write a lot of code, mostly for tools we use to monitor and maintain the network, but I've also had a few features and bug-fixes get into Google-wide (well, eng-wide) internal tools. This has led me to use a bunch of languages, quite probably more than most Google software engineers.
Languages I've used in the last three months:
In the two years since I started there's also:
- SLAX (An alternate syntax version of XSLT)
These end up being a fairly unsurprising mix of standard sysadmin, web, and systems-programmer fare, with the real outliers being Go, the new C-ish systems language created at Google (several of the people working on the language sit just on the other side of a wall from me), and tcsh & SLAX, which come from working with Juniper's JunOS, which is built on FreeBSD with an XML-based configuration.
First of all, yes, I'm a general Greens/Labor supporter, so it's not surprising that I like the NBN. I work in a tech field (specifically datacomms) that also generally supports the NBN. I have plenty of reasons to support the NBN from those alone, but here are the "how it affects me" ones.
I live in inner-city Sydney, specifically Ultimo, literally in the afternoon shadow of (probably) the most wired building in the country, the GlobalSwitch datacenter, yet on a bad day I get 1Mb or so through my Internet connection.
I do have two basic options for a home Internet connection where I live, both through Telstra's infrastructure: either their HFC cable network, or classic copper. As Telstra (last time I tried) were unable to sell a real business-class service on HFC I use ADSL, and since I consider IPv6 support a requirement, I use Internode (there are other reasons I use them as well, but IPv6 sadly is still hard to get from other providers). Due to the location of my apartment 3G/4G services aren't an option even if they were fast enough (or affordable).
Sadly the copper in Ultimo is in a poor state, with high cross-talk and attenuation. I suspect this is due to it passing underground through Wentworth Park, which is only a metre or two above sea level, so water damage is highly likely.
Even on a good day I only barely sustain ~8Mb down, ~1Mb up, with no disconnects, on a bad day it can be as low as 1Mb down, with multiple disconnects per hour.
It's the last point I particularly care about: regular disconnects make the service unreliable, and are its most irritating aspect.
Speed, although people often want it, is of limited value to me: once I can do ~20Mb down and ~5Mb up that's enough, as it handles file uploads fast enough and offers a nice buffer for video conferencing (yes, I really do join video conferences from home). I suspect I will subscribe to a 50/20 plan if/when NBN is available in my area. In theory NBNco should have commenced construction in the Pyrmont/Glebe/Ultimo/Haymarket area this month (per their map), but I'm not aware of anyone who has been contacted to confirm this.
Because I'm in an apartment complex it's likely that the Coalition's plan would have a node in the basement (there are already powered HFC amps in the basement car parks, so it wouldn't be unprecedented); this matches some versions of the NBN plan, although the current plan (last I saw) is fibre to each apartment. Once a node is within a building, cable damage due to water is unlikely and I'd probably be able to sustain a decent service, but if I ever moved to a townhouse I'd be back to potentially dodgy underground cable.
I don't believe forcing Telstra & Optus to convert their HFC networks into wholesale-capable networks is a sensible idea, for a variety of reasons, the two major ones being whether they would still be able to carry TV, and the need to split the HFC segments into much smaller sections to sustain throughput. Even with the current low take-up rate of cable Internet I know many people in Melbourne who find their usable throughput massively degrades at peak times, something that would almost certainly get worse, not better, if HFC became the sole method of high-speed Internet in a region.
I also still maintain my server and Internet service at my old place in Melbourne, and will get at least 50/20 there, should it ever be launched. When I first got ADSL2+ service there I managed to sustain 24/2, but now only 12/1, presumably due to degrading lines, which makes me wonder how fast and reliable a VDSL-based FTTN would actually be.
1: The only real alternate path would add a kilometre or more of cable to avoid that low lying area.
Over the last few months my (now retired) Lenovo T410 had been starting to show its age. I received it in July 2010, making it roughly 21 months old at the time of its retirement. By this point I'd replaced the speakers, and had a spare screen waiting for when the original one finally went (I didn't end up swapping it simply because, by the time it was getting bad enough, I already had the T430 ordered). Other things wearing out were the two left mouse buttons and the screen lid sensor. I was also starting to feel pressured by the 8GB RAM limitation, and had purchased 16GB of RAM in the hope that the T410 could actually use it, but no such luck.
I'd thought I might manage to give two generations a miss instead of my normal one, but having seen a preview of the T431 on Engadget
I decided to order a T430 anyway, as Lenovo are removing the physical mouse buttons in the next generation. I wasn't a fan of their switch to a chiclet keyboard, but the mouse buttons would have been a line too far.
I chose the T430 over the X1 Carbon simply because I wanted support for 16GB of RAM, and over the T430s because I didn't think the weight savings would be worth the extra few hundred dollars (I also thought the T430s had a 7mm drive slot while the T430 had a 9.5mm one; I was wrong there)
Ordered top of the line except for Intel graphics and base RAM & HDD, and shipped to the Googleplex, it cost me less than AU$1000, including paying CA sales tax. One of my coworkers was able to bring it back with him on a visit to our Sydney office, saving having to drop-ship it as I did back in 2007 with my T61. The only reason I went this route was that buying it from Lenovo direct in Australia would have been over 50% more expensive. (My T410 was actually purchased in Australia, at a price that was competitive with the US, demonstrating that they can actually do that.)
Major differences with the T410:
- No firewire - I never actually used the firewire on the T410, and my few bits of media kit with firewire get much more use on my MacBook anyway
- No modem - Finally! It's been years since I used a dial-up modem on a laptop.
- USB3 - Might be useful
- Smartcard reader - Ordered this option under the theory I might migrate my GPG key over to it
- Chiclet keyboard - Surprisingly decent, not quite as good as the old design, but I think I'll be happy enough with it
- 1600x900 screen - I think this is a serious downgrade; I wish they still had a 16:10 option. The extra width ranges from mostly useless to making some apps harder to use maximised.
Sadly I couldn't order the backlit keyboard (oddly this one option was available in the Australian store), but I may well order the part and do the retrofit.
The weight is slightly better, as is the size, but neither is noticeable unless you have a T410 at hand to compare it with.
As with my previous upgrade from T61 to T410, I'd simply planned to pop the drive carrier out of the old machine and into the new one and migrate with no effort. While getting the drive out of the T410 was easy, I'd missed noticing that the T430 takes a 7mm drive, not the old standard of 9.5mm. Sadly the SSD I'd only recently upgraded to was 9.5mm, but as it had a plastic (top) case I was fairly sure that some quick destruction would solve that. Sure enough, after getting the top case off I was able to hot-glue the PCB onto the metal bottom case and screw it into the new carrier, and it booted straight into Debian with no issues.
The only teething issue I had was that the left mouse button on the trackpad would release if I tried to perform a drag. I found some threads claiming this was a design choice, along with the setting in the Windows driver to disable it, but could find no equivalent in the Xorg synaptics driver. Oddly, it simply started working the next morning; I've no idea what the deal is here.
On the topic of shiny new laptops, I got to try the new Chromebook Pixel on Friday evening, and first impressions are that it's by far the nicest laptop I've seen. Although most of the attention has been placed on the screen, which I agree is nice, I found both the keyboard and speakers more impressive, with the speakers being the best I've heard on a laptop since 2002 (that was a Toshiba Tecra, one I later used in 2004-2005). Sadly I can't swap my work T410 for one, as I require a (standard) Linux laptop for a few reasons (that one will be upgraded to an X1 Carbon soon), but I may well buy one anyway.