As with many massive time-sucking rabbit holes in my life, this one starts with one of my silly ideas getting egged on by some of my colleagues in London (who know full well who they are), but for a nice change, this is something I can talk about.
I have a rather excessive number of laptops; at the moment my three main ones are a rather ancient Lenovo T430 (personal), a Lenovo X1 Gen4, and a Chromebook Pixel 2 (both work).
At the start of last year I had a T430s in place of the X1, and was planning on replacing both it and my personal ThinkPad mid-year. However, both of those older laptops used Lenovo's long-held (back to the IBM days) barrel charger, which led to me having a heap of them in various locations at home and work. All the newer machines have switched to Lenovo's rectangular "slim" style power connector, and while adapters exist, I decided to go in a different direction.
One of the less-touted features of USB-C is USB-PD, which allows devices to be fed up to 100W of power while still using the port for data (or for the other great feature of USB-C, alternate modes such as DisplayPort, great for docks). It's starting to be used as a way to charge laptops, including the Chromebook Pixel 2, various models of the Apple MacBook line, and more.
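For a sense of what the negotiation looks like on the wire: a source advertises its capabilities as 32-bit Power Data Objects, where (for a fixed-supply PDO) the voltage is encoded in 50mV units and the maximum current in 10mA units. A minimal decoder sketch, based on my reading of the PD spec rather than any shipping firmware:

```python
def decode_fixed_pdo(pdo: int) -> tuple:
    """Decode a fixed-supply Power Data Object into (volts, amps, watts).

    Bits 19:10 hold the voltage in 50mV units, bits 9:0 the maximum
    current in 10mA units.
    """
    volts = ((pdo >> 10) & 0x3FF) * 0.05
    amps = (pdo & 0x3FF) * 0.01
    return volts, amps, volts * amps

# A 20V/2.25A (45W) capability, the sort of profile a 45W laptop
# charger advertises:
print(decode_fixed_pdo((400 << 10) | 225))  # → (20.0, 2.25, 45.0)
```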
Instead of buying a heap of slim-style Lenovo chargers, or a load of adapters (which would inevitably disappear over time), I decided to bridge towards the future by making an adapter to allow me to charge slim-type ThinkPads (at least the smaller ones, not the portable workstations, which demand 120W or more).
After doing some research on what USB-PD platforms were available at the time I settled on the TI TPS65986 chip, which, with only an external flash chip, would do all that I needed. Devkits were ordered to experiment with and prove the concept, which they did very quickly, so I started on building the circuit, since just reusing the devkit boards would lead to an adapter larger than would be sensible. As the TI chip is a many-pin BGA, and breaking it out on two layers would probably be too hard for my meager PCB design skills, I needed a 4-layer board, so I decided to use KiCad for the project.
It took me about a week of evenings to get the schematic fully sorted, with much of the time spent reading the chip datasheet, or digging through the devkit schematic to see what they did for cases that weren't clear. The actual PCB layout then took almost a month, with much of the time sucked up learning a tool that was brand new to me, and also fairly obtuse.
By mid-June I had a PCB which should (but, spoiler, wouldn't) work. However, as mentioned, the TI chip is a 96-ball 3x3mm BGA, something I had no hope of manually placing for reflow, and of course no hope of hand soldering, so I would need to get these manufactured commercially. Luckily there are several options for small-scale assembly at very reasonable prices, and I decided to try a new company still (at the time of ordering) in closed trials, PCB.NG. They have a nice simple procedure to upload board files, plus a slightly custom pick & place file that includes references to the exact components I want by Digikey part number. Best of all, the pricing was completely reasonable, with a first test run of six boards costing me only US$30 each.
Late in June I received an email from PCB.NG telling me that they'd built my boards, but that I had made a mistake with the footprint I'd used for the USB-C connector, and that they were posting my boards along with the connectors. As I'd had them ship the order to California (at the time they didn't seem to offer international shipping) it took a while for the boards to arrive in Sydney, courtesy of a coworker.
I tried to modify a connector by removing all the through-hole board locks, keeping just the surface mount pins, but was unsuccessful, and that's where the project stalled until mid-October, when I was in California myself and was able to get help from a coworker who can perform miracles of surface mount soldering (while they were working on my board they were also dead-bug mounting a BGA). Sadly, while I now had a board I could test, it simply dropped off my priority list for months.
At the start of January another of my colleagues (a US-based teammate of the London rabble-rousers) asked for a status update, which prompted me to get off my butt and perform the testing. The next day I added some reinforcement to the connector, which was only really held on by the surface mount pins and highly likely to rip off the board, by covering it in epoxy. Then I knocked up some USB-A plug/socket to bare-wire test adapters using some stuff from the junk bin we have at the office maker space for just this sort of occasion (the socket was actually a front panel USB port from an old IBM x-series server). With some trepidation I plugged the board into my newly built & tested adapter, and powered the board from a lab supply set to limit current in case I had any shorts in the board. It all came up straight away, and even lit the LEDs I'd added for some user feedback.
Next was to load a firmware for the chip. I'd previously used TI's tool to create a firmware image, and after some messing around with the SPI flash programmer I'd purchased I managed to get the board programmed. However, the behaviour of the board didn't change with (what I thought was) real firmware, so I used an oscilloscope to verify the flash was being read, and a Twinkie to sniff the PD negotiation, which confirmed that no request for 20V was being sent. This was where I finished that day.
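In outline, what the firmware should have been doing is simple: read the source's advertised capability list and request the entry that meets the sink's needs. A toy sketch of that policy, assuming fixed-supply capabilities given as (volts, amps) pairs (real firmware, the TI policy engine included, works on the raw 32-bit objects):

```python
def pick_pdo(source_caps, want_volts=20.0, need_watts=45.0):
    """Return the index (1-based, as PD requests are) and capability of
    the first advertised profile meeting the sink's requirements, or
    None if the source can't supply what we need."""
    for index, (volts, amps) in enumerate(source_caps, start=1):
        if volts == want_volts and volts * amps >= need_watts:
            return index, (volts, amps)
    return None

# A typical 60W source advertises 5V, 9V, 15V and 20V profiles:
caps = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 3.0)]
print(pick_pdo(caps))  # → (4, (20.0, 3.0))
```

A sink that never emits a request like this (as the Twinkie capture showed) just stays at the default 5V, which matched the behaviour I was seeing.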
Over the weekend that followed I dug into what I'd seen and determined that either I'd killed the SPI MISO port (the programmer I used was 5V, not 3.3V), or I just had bad firmware and the chip had some good defaults. I created a new firmware image from scratch, and loaded that.
Sure enough, it worked first try. Once I confirmed 20V was coming from the output ports I attached it to my recently acquired HP 6051A DC load, where it happily sank 45W for a while; then I attached the cable part of a Lenovo barrel-to-slim adapter and plugged it into my X1, where it started charging right away.
Last week I gave (part of) a hardware miniconf talk about USB-C & USB-PD, which open source hardware folk might be interested in. Over the last few days, while visiting my dad down in Gippsland, I made the edits to fix the footprint and sent a new rev to the manufacturer for some new experiments.
Of course, at CES Lenovo announced that this year's ThinkPads would feature USB-C ports and allow charging through them, and due to laziness I never got around to replacing my T430, so I'm planning to order a T470 as soon as they're available, making my adapter obsolete.
- April 21st 2016, decide to start working on the project
- April 28th, devkits arrive
- May 8th, schematic largely complete, work starts on PCB layout
- June 14th, order sent to CM
- ~July 6th, CM ships order to me (to California, then hand carried to me by a coworker)
- early August, boards arrive from California
- Somewhere here I try, and fail, to reflow a modified connector onto a board
- October 13th, California coworker helps to (successfully) reflow a USB-C connector onto a board for testing
- January 6th 2017, finally got around to reinforcing the connector with epoxy and started testing; tried loading firmware but no dice
- January 10th, redo firmware, it works, test on DC load, then modify a ThinkPad-slim adapter and test on a real ThinkPad
- January 25/26th, fixed USB-C connector footprint, made one more minor tweak, sent order for rev2 to CM, then some back & forth over some tolerance issues they're now stricter on.
1: There's a previous variant of USB-PD that works on the older A/B connector, but, as far as I'm aware, was never implemented in any notable products.
One thing people might be surprised about regarding my job is that although I'm in network operations I write a lot of code, mostly for tools we use to monitor and maintain the network, but I've also had a few features and bug-fixes land in Google-wide (well, eng-wide) internal tools. This has led me to use a bunch of languages, quite probably more than most Google software engineers.
Languages I've used in the last three months:
In the two years since I started there's also:
- SLAX (An alternate syntax version of XSLT)
These end up being a fairly unsurprising mix of standard sysadmin, web, and systems programmer fare, with the real outliers being Go, the new C-ish systems language created at Google (several of the people working on the language sit just on the other side of a wall from me), and tcsh & SLAX, which come from working with Juniper's JunOS, which is built on FreeBSD with an XML-based configuration.
First of all, yes, I'm a general Greens / Labor supporter, so it's not surprising that I like the NBN. I work in a tech field (specifically datacomms) that also generally supports the NBN. I have plenty of reasons to support the NBN from those, but here are the "how it affects me" ones.
I live in inner-city Sydney, specifically Ultimo, literally in the afternoon shadow of (probably) the most wired building in the country, the GlobalSwitch datacenter, yet on a bad day I get 1Mb or so through my Internet connection.
I have two basic options for a home Internet connection where I live, both through Telstra's infrastructure: either their HFC cable network, or classic copper. As Telstra (last time I tried) were unable to sell a real business-class service on HFC, I use ADSL, and since I consider IPv6 support a requirement, I use Internode (there are other reasons I use them as well, but IPv6 sadly is still hard to get from other providers). Due to the location of my apartment, 3G/4G services aren't an option even if they were fast enough (or affordable).
Sadly, the copper in Ultimo is in a poor state, with high cross-talk and attenuation. I suspect this is due to it passing underground through Wentworth Park, which is only a metre or two above sea level, so water damage is highly likely.
Even on a good day I only barely sustain ~8Mb down, ~1Mb up with no disconnects; on a bad day it can be as low as 1Mb down, with multiple disconnects per hour.
It's the last point I particularly care about: regular disconnects make the service unreliable, and they're the aspect I find most irritating.
Speed, although people often want it, is of limited value to me; once I can do ~20Mb down and ~5Mb up that's enough, handling file uploads fast enough and offering a nice buffer for video conferencing (yes, I really do join video conferences from home). I suspect I will subscribe to a 50/20 plan if/when the NBN is available in my area. In theory NBNco should have commenced construction in the Pyrmont/Glebe/Ultimo/Haymarket area this month (per their map), but I'm not aware of anyone who has been contacted to confirm this.
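To put rough numbers on why ~20Mb down / ~5Mb up is my "enough" point, here's the back-of-envelope arithmetic I have in mind (a sketch only; the file size and the 15% overhead figure are illustrative assumptions, not measurements):

```python
def transfer_minutes(size_mb, rate_mbit, overhead=0.15):
    """Minutes to move size_mb megabytes at rate_mbit megabits/sec,
    allowing some fraction of protocol overhead."""
    effective_mbit = rate_mbit * (1 - overhead)
    return size_mb * 8 / effective_mbit / 60

# A 500MB upload (say, a video) at my current ~1Mb up vs a 5Mb tier:
print(round(transfer_minutes(500, 1.0)))  # → 78
print(round(transfer_minutes(500, 5.0)))  # → 16
```

Over an hour versus a quarter of one; past that point further speed mostly just shaves minutes I don't notice.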
Because I'm in an apartment complex, it's likely that the coalition's plan would put a node in the basement (there are already powered HFC amps in the basement car parks, so it wouldn't be unprecedented); this matches some versions of the NBN plan, although the current plan (last I saw) is fibre to each apartment. Once a node is within a building, cable damage due to water is unlikely and I'd probably be able to sustain a decent service, but if I ever moved to a townhouse I'd be back to potentially dodgy underground cable.
I don't believe forcing Telstra & Optus to convert their HFC networks into wholesale-capable networks is a sensible idea, for a variety of reasons, the two major ones being whether they would still be able to send TV, and the need to split the HFC segments into much smaller sections to sustain throughput. Even with the current low take-up rate of cable Internet, I know many people in Melbourne who find their usable throughput massively degrades at peak times, something that would almost certainly get worse, not better, if HFC became the sole method of high-speed Internet in a region.
I also still maintain my server and Internet service at my old place in Melbourne, and will get at least 50/20 there, should it ever be launched. When I first got ADSL2+ service there I managed to sustain 24/2, but now only 12/1, presumably due to degrading lines, which makes me wonder how fast and reliable a VDSL-based FTTN would actually be.
1: The only real alternate path would add a kilometre or more of cable to avoid that low lying area.
Over the last few months my (now retired) Lenovo T410 had been starting to show its age. I received it in July 2010, making it roughly 21 months old at the time of its retirement; by this point I'd replaced the speakers, and had a spare screen waiting for when the original one finally went (I didn't end up swapping it simply because by the time it was getting bad enough I already had the T430 ordered). Other problems were the two left mouse buttons wearing out, and the screen lid sensor getting dodgy. I was also starting to be pressured by the 8GB RAM limitation, and had purchased 16GB of RAM in the hope that the T410 could actually use it, but no such luck.
I'd thought I might manage to give two generations a miss instead of my normal one, but having seen a preview of the T431 on Engadget I decided to order a T430 anyway, as Lenovo are removing the physical mouse buttons in the next gen. I wasn't a fan of them switching to a chiclet keyboard, but losing the mouse buttons would have been a line too far.
I chose the T430 over the X1 Carbon simply because I wanted support for 16GB of RAM, and over the T430s because I didn't think the weight savings would be worth the extra few hundred dollars (I also thought the T430s had a 7.5mm drive slot and the T430 a 9.5mm one; I was wrong there).
Ordered top of the line except for Intel graphics and base RAM & HDD, and shipped to the Googleplex, it cost me less than AU$1000, including paying CA sales tax. One of my coworkers was able to bring it back with him on a visit to our Sydney office, saving having to drop-ship it as I did back in 2007 with my T61. The only reason I went this route was that buying it from Lenovo direct in Australia would have been over 50% more expensive. (My T410 was actually purchased in Australia, at a price that was competitive with the US, demonstrating that they can actually do that.)
Major differences with the T410:
- No firewire - I never actually used the firewire on the T410, and my few bits of media kit with firewire get much more use on my MacBook anyway
- No modem - Finally! It's been years since I used a dial-up modem on a laptop.
- USB3 - Might be useful
- Smartcard reader - Ordered this option under the theory I might migrate my GPG key over to it
- Chiclet keyboard - Surprisingly decent, not quite as good as the old design, but I think I'll be happy enough with it
- 1600x900 screen - This I think is a serious downgrade; I wish they still had a 16:10 option. The extra width ranges from mostly useless to making some apps harder to use maximised.
Sadly I couldn't order the backlit keyboard (oddly this one option was available in the Australian store), but I may well order the part and do the retrofit.
The weight is slightly better, as is the size, but neither is noticeable unless you have a T410 at hand to compare.
As with my previous upgrade from T61 to T410, I'd simply planned to pop the drive carrier out of the old machine and into the new one, migrating with no effort. While getting the drive out of the T410 was easy, I'd missed noticing that the T430 takes a 7.5mm drive, not the old standard of 9.5mm. Sadly the SSD I'd only recently upgraded to was 9.5mm, but as it had a plastic (top) case I was fairly sure that some quick destruction would solve that. Sure enough, after getting the top case off I was able to hot-glue the PCB onto the metal bottom case and screw it into the new carrier, and it booted straight into Debian with no issues.
The only teething issue I had was that the left mouse button on the trackpad would release if I tried to perform a drag. I found some threads claiming this was a design choice, along with the setting to change in the Windows driver to disable it, but could find no equivalent in the Xorg synaptics driver. Oddly, however, it simply started working the next morning. I've no idea what the deal is here.
On the topic of shiny new laptops, I got to try the new Chromebook Pixel on Friday evening, and first impressions are that it's by far the nicest laptop I've seen. Although most of the attention has been placed on the screen, which I agree is nice, I found both the keyboard and speakers more impressive, with the speakers being the best I've heard on a laptop since 2002 (that was a Toshiba Tecra, one I later used in 2004-2005). Sadly I can't swap my work T410, as I require a (standard) Linux laptop for a few reasons (that one will be upgraded to an X1 Carbon soon), but I may well buy one anyway.
I currently don't have a server in my Sydney apartment. Sure, I've got a Raspberry Pi doing a few minor things (RADIUS, syslog, NTP, DNS), but those are all for my network devices; I don't use them directly.
What I'd love to be able to buy is some form of mini-cluster, probably in the form of a 3ish RU rackmount device that has a gigabit ethernet switch as a backplane, and lets me slot in CPU blades.
Ideally I'd want two kinds of blades, the first being ARM64, with as much RAM as parts will allow (8-32GB), and two disks (allowing a mix of spinning for bulk, and SSD for speed). These would mostly run something like OpenShift.
The second set of blades would be standard Intel i7, again with as much RAM as possible (32-64GB), and the same storage as the ARM blades (or external) to allow VMs where I can't just run on ARM.
Obviously there'd need to be some form of self-backup, and ideally the cluster admin role could automatically fail over between nodes as needed.
With something like this I could deploy it at the three sites where I've got servers, allocating an IPv6 /64 to each site, which would let every instance have static addressing and just migrate between nodes.
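The per-site /64 idea is easy to sketch with Python's ipaddress module: carve site /64s out of some larger allocation (the addresses here are from documentation space, not my real ranges):

```python
import ipaddress

# Hypothetical /56 allocation, split into per-site /64s:
allocation = ipaddress.IPv6Network("2001:db8:0:100::/56")
sites = list(allocation.subnets(new_prefix=64))

melbourne, sydney = sites[0], sites[1]
print(melbourne)  # → 2001:db8:0:100::/64
print(sydney)     # → 2001:db8:0:101::/64

# Each instance then gets a static address within its site's /64, so it
# can migrate between nodes without renumbering:
server = melbourne[0x1]
print(server)  # → 2001:db8:0:100::1
```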
ARM64 systems have been talked about, but I've yet to see a serious design that isn't a dense cluster, only useful if you've got hundreds of nodes.
I've put money into a lot of Kickstarter projects, and I finally got around to doing a summary of what I've spent:

2012:
19 projects, total of US$3,295 pledged (however $194 was to projects that failed to reach target)
- 2 failed to reach target
- 6 received
- 11 are in clear progress or otherwise still seem likely to ship (several are in progress of shipping or have delivered early parts [eg, ebook before printing])
17 projects, total of US$1,943 pledged (however $10 was to projects that failed to reach target)
5 projects, total of US$1,177 pledged
It's time for another post pointing out what I think Juniper should do (two previous posts in October 2011 covered Qfabric and other switching).
Thankfully it's no longer a secret that I use (the big) Juniper kit at work (and yes, I've logged in to some of the boxes in that photo this week). I also have a bunch of personal contacts at Juniper, but I know nothing about any unreleased products.
Qfabric has really improved, and people really are deploying it. Of my previous list, the things that still stick out are the lack of MX & SRX integration, especially as we now know it's just software blocking it. For smaller sites, offering an inbuilt 24-port switch on the control plane nodes would remove the control plane aggregation boxes.
I'm still amazed they're yet to offer a shared external control plane (for EX, QF, and perhaps MX80) and a serious BGP route reflector; surely the profit made from selling M7i's and MX240's (for those with higher BGP needs) for nothing but BGP reflection can't be that high.
While I'm on the MX, the MX80 is starting to look a little long in the tooth; I believe if they were to do a refresh using the chips in the MPC4 they should be able to double the forwarding throughput (I'd hope they seriously improve the control plane at the same time).
One more thing I mentioned in one of the previous entries was how I'd love a version of the EX2200-C with 10G uplinks; I really doubt that power is the reason they don't offer it (although perhaps they wouldn't be able to do a PoE variant of that combo). In fact, I've just ordered two of the PoE version to use in my home networks in both Melbourne and Sydney.
The EX8200, another year on, is now looking very old, and could really use a new fabric to give it denser 10G, and to offer 40G or even 100G ports for those who don't want to go down the Qfabric path.
Speaking of refreshes, it might be interesting to see a device in between the SRX650 and the SRX1400, as Cavium (who make the processors used in the lower-end SRXs) have some nice options now.
While on the SRX: it's beyond pathetic that now, in 2013, they still don't offer usable IPv6-supporting CPE (even on the v4 side there are enough limitations that they still don't make a good CPE on their own).
Sadly this leads to my last point: JunOS release management has been going downhill for as long as I've been using their kit (since 2009), and despite several re-orgs it still doesn't seem to be getting much better. Despite dropping back from four to three releases a year, they failed at even that, as 12.3 still hasn't been released.
Earlier in the year they were trying to hire people in the JunOS PM area, I hope they managed to find someone good.
It has very much become a tradition for me to read Cory Doctorow's latest book whilst flying to or from the US, and write a blog post. So this time, while I sit in my business class seat (points upgrade, sadly the company doesn't pay for it), here's this year's edition.
Before I get to that: this year's trip was very short. I flew in last Sunday (the 11th) and am flying back on Monday the 19th (US time; I get back Wednesday the 21st AU time). Excluding the two days lost to travel, I've worked pretty much a full day every other day I've been here, making what's probably a 70-hour week; luckily this is a very rare event, and in this case mostly my own fault for trying to stuff as much as possible into the trip.
For this trip I decided to skip the A380 and went classic with the 747. The premium product is fairly similar, although it's down on the main deck, not the upper, which is a shame.
On the return journey I managed to snag a (points) upgrade to business, and a return to my favourite aircraft cabin, the 747-400's upper deck. As it happens this was the ill-fated QF108, which circled the ocean off LA for an hour dumping fuel before returning, sending me to the LAX Hilton for a day to wait for a replacement flight back to SYD, which turned out to be QF12, the A380 LAX-SYD service.
Now having tried both, I think I still prefer the upper deck of a 747 over the A380, but it really is only for that sense of privacy you get with the small cabin; other than that the A380 probably wins in every way (except perhaps speed, with the A380 cruising over 100km/h slower than I've usually seen a 747, making the Pacific crossing over an hour longer).
This is actually my first long-haul business class flight. I've done the LAX-JFK Qantas flight in business (in 2008, when I went to HOPE), but at only around five hours that's much shorter than the full 14(ish) hours of a transpacific flight, and although I got a business seat on a LAX-MEL flight back in 2009, that was still with premium service, something that until now I'd considered fairly minimal. I do still think that for what you get business class is overpriced, but if (as I feel) it's saved me a day of recovery so I can get back to work a day earlier, that can easily be worth the difference in price as you move only a level or two up in an organisation's hierarchy. Just having a nice, quiet, catered lounge at LAX is worth paying some amount for, although the Internet there is only barely usable (and Qantas are *still* stealing other people's IP space for their captive portals, which is a pathetic indictment of whoever runs them).
Although I've started reading "Rapture of the Nerds", I'm only about a third of the way through it; so far it seems very inspired by Anathem (et al.; there are many other books in this vein) and it really isn't grabbing me like much of Cory's other work has.
1: 2011 edition: http://laptop006.livejournal.com/55566.html, 2008 edition: http://laptop006.livejournal.com/45918.html. Including this post that's a 60% hit rate, missing only my 2009 and 2007 trips; I did write a few posts about the 2007 trip, but there's about an 18-month empty space around my 2009 trip.
(If you like the wallpaper, how about the rest of my machines?)
This week's project was building a Raspberry Pi laptop, inspired by others' early attempts, the new v2 boards that are designed to allow reverse power feed from the USB host ports (saving the need to hack up USB cables), and the cheapness of Motorola LapDocks on eBay (mine was < $80 at the time, although they've gone up quite a bit recently, as it seems others have had the same idea).
From the back you can see how it's put together: the Pi is inside a 3D printed case (thanks to Jan at work, who has a MakerBot at his desk, and responded to my "hey, would someone consider helping me with this" with "I've done a test print, come try it!"), and that case is simply attached to the outside screen portion of a Motorola LapDock (the early Atrix version) which I purchased on eBay for less than a hundred dollars shipped.
From there it's a few simple cables and adapters to use it:
- MicroHDMI female-female adapter & HDMI to microHDMI cable
- MicroUSB extension cable & USB-A to microUSB female adapter
I purchased these all from eBay as well for a total of about $10
I did remove the dock part of the LapDock to achieve a more solid design, and then used Sugru to make the connectors and cables into one solid unit.
There are a few issues with this:
- Battery life is fairly low, I suspect swapping out the linear regulator as others have tried would help a fair bit.
- Closing the lid interrupts power, crashing the Pi (and often corrupting the root filesystem); adding a supercap to the Pi has solved this for others.
- Power is controlled by (un)plugging the HDMI connection between the two, this is a pain, and not trivially solvable.
- Although this is a 512MB Pi, the firmware needed to use the other half of the memory hasn't hit the Raspbian repos yet, and I would rather keep an updatable Raspbian than get the extra RAM.
- The keyboard, whilst better than the netbooks of old, is still tiny and hard for me to type on.
- The touchpad is giant, but has low sensitivity to movement and high sensitivity to tapping; this is probably fixable in config.
- Need to find a good Pi-compatible wifi adapter.
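On the touchpad point, the Xorg synaptics driver exposes the relevant knobs, so an InputClass section along these lines would be the place to tune it (the values here are untested guesses to start from, not settings I've verified on the LapDock):

```
Section "InputClass"
    Identifier "lapdock touchpad tuning"
    MatchIsTouchpad "on"
    Driver "synaptics"
    # Raise pointer speed to compensate for low movement sensitivity...
    Option "MinSpeed" "1.0"
    Option "MaxSpeed" "2.5"
    Option "AccelFactor" "0.10"
    # ...and make taps harder to trigger accidentally (0 disables them)
    Option "MaxTapTime" "100"
EndSection
```

The same options can be tried live with synclient before committing them to xorg.conf.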
Ultimately, while this is a nice project to use as a platform to demo the Pi, if you actually wanted a decent ARM netbook/laptop I'd just go buy one of the new Chromebooks, which already have docs up on how to run your ARM-capable distro of choice.
Today several people linked to a piece on the "Transition Darebin" website. It's worth pointing out that this group doesn't actually appear to be associated with the Darebin council; it's just composed of members mainly from that region.
It says a bunch of things I, and many others, have a problem with.
I'll just respond to a few points:
1. Delete unused data - Actually, the energy cost of you interacting with computers to find and delete data is almost certainly far greater than the cost of storing that data indefinitely; as long as it's not media (audio, pictures, video) it's most likely very small. The major reason to consider deleting data is if finding the data you want starts to become painful, not (in general) the amount. At "large enterprise" scales the cost to store a terabyte in a highly available fashion is now in the low hundreds of dollars a year, and that involves having multiple copies available online at all times, plus several copies in multiple locations on tape. When done the way most web sites do it, it's much less. Even including all my (archived) received e-mail, my personal data from the last decade would easily fit on a DVD.
2. "Save large files that only need to be read-only as PDFs" - This is actually a good thing to do, but for a different reason. Any file that's in a proprietary format (e.g., all MS Office docs, even the newer docx/xlsx/etc. ones) may become unreadable at some point in the future if the software to read it becomes unavailable. Converting the data into a simple format (plain text, HTML, etc.), or at least a (publicly) standardised format such as PDF, helps prevent data loss.
3. "Delete accounts you no longer use" - Worth doing to prevent issues when accounts and/or services get hijacked, but as mentioned above, the cost of the energy used to delete the data is probably higher than the cost to store it indefinitely.
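The back-of-envelope behind points 1 and 3 is easy to run; all the figures here are illustrative assumptions (the ~$200/TB/year enterprise number mentioned above, and a guess at what your time is worth):

```python
def yearly_storage_cost(size_gb, dollars_per_tb_year=200.0):
    """Yearly cost to keep size_gb stored in a highly available fashion."""
    return size_gb / 1024 * dollars_per_tb_year

# A DVD's worth of personal data (roughly a decade of mine):
print(round(yearly_storage_cost(4.7), 2))  # → 0.92

# Versus even ten minutes spent hunting down things to delete, at an
# assumed $30/hour value of your time:
print(round(10 / 60 * 30, 2))  # → 5.0
```

Under these assumptions the ten minutes of deleting costs more than five years of just leaving the data where it is.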
But there are other things not worth doing:
1. Unplugging "phantom loads". The vast majority of phantom loads really are trivial and not worth your time to unplug, and this is only improving as higher efficiency standards are introduced. As a general rule, if a (small) device is not noticeably warm to the touch when unused, it's not consuming a large amount of power and you shouldn't bother unplugging it unless you won't be using it for weeks on end. Taking an occasional cool shower can save much more energy (especially if you use electric hot water).
Some things that might actually be worth doing:
1. Upgrade hardware every few years (and *properly* recycle old hardware). If you heavily use your computers and other devices, the efficiency gains from new hardware can easily justify themselves; for businesses this is often *trivial*, yet many companies don't consider it. At my previous job, replacing all the servers with fewer, current-generation, high-performance servers would have cut more than a third off a very expensive power bill, and mostly justified the equipment on that merit alone.
2. Fix teleworking. This, more than most "digital" things, is a real issue, and one that the NBN won't solve: companies need to think about what roles remote, or even just occasionally remote, workers can take up. The fossil fuel saved from just a fraction of people commuting less often pays for the energy costs of one *very* large network. The NBN doesn't solve this because the majority of the problem isn't technical, but the policies and procedures at each individual company.