One thing that might surprise people about my job is that although I'm in network operations I write a lot of code, mostly for tools we use to monitor and maintain the network, though I've also had a few features and bug-fixes land in Google-wide (well, eng-wide) internal tools. This has led me to use a bunch of languages, quite probably more than most Google software engineers.
Languages I've used in the last three months:
In the two years since I started there are also:
- SLAX (An alternate-syntax version of XSLT)
These end up being a fairly unsurprising mix of standard sysadmin, web and systems programmer fare, with the real outliers being Go, the new C-ish systems language created at Google (several of the people working on the language sit just on the other side of a wall from me), and tcsh & SLAX, which come from working with Juniper's JunOS, which is built on FreeBSD with an XML-based configuration.
First of all, yes I'm a general Greens / Labour supporter so it's not surprising that I like the NBN. I work in a tech field (specifically datacomms) that also generally supports the NBN. I have plenty of reasons to support the NBN from those, but here's the "how it affects me" ones.
I live in inner-city Sydney, specifically Ultimo, literally in the afternoon shadow of (probably) the most wired building in the country, the GlobalSwitch datacenter, yet on a bad day I get 1Mb or so through my Internet connection.
I do have two basic options for a home Internet connection where I live, both through Telstra's infrastructure: either their HFC cable network or classic copper. As Telstra (last time I tried) were unable to sell a real business-class service on HFC, I use ADSL, and since I consider IPv6 support a requirement, I use Internode (there are other reasons I use them as well, but IPv6 sadly is still hard to get from other providers). Due to the location of my apartment, 3G/4G services aren't an option even if they were fast enough (or affordable).
The copper in Ultimo is in a sad state, with high cross-talk and attenuation. I suspect this is due to it passing underground through Wentworth Park, which is only a metre or two above sea level, so water damage is highly likely.
Even on a good day I can only barely sustain ~8Mb down, ~1Mb up with no disconnects; on a bad day it can be as low as 1Mb down, with multiple disconnects per hour.
It's the last point I particularly care about: regular disconnects make the service unreliable, and they're the most irritating aspect.
Speed, although people often want it, is of limited value to me: once I can do ~20Mb down and ~5Mb up that's enough; it handles file uploads fast enough and offers a nice buffer for video conferencing (yes, I really do join video conferences from home). I suspect I will subscribe to a 50/20 plan if/when the NBN is available in my area. In theory NBNco should have commenced construction in the Pyrmont/Glebe/Ultimo/Haymarket area this month (per their map), but I'm not aware of anyone who has been contacted to confirm this.
Because I'm in an apartment complex it's likely that the Coalition's plan would put a node in the basement (there are already powered HFC amps in the basement car parks, so it wouldn't be unprecedented); this matches some versions of the NBN plan, although the current plan (last I saw) is fibre to each apartment. Once a node is within a building, cable damage due to water is unlikely and I'd probably be able to sustain a decent service, but if I ever moved to a townhouse I'd be back to potentially dodgy underground cable.
I don't believe forcing Telstra & Optus to convert their HFC networks into wholesale-capable networks is a sensible idea, for a variety of reasons, the two major ones being whether they would still be able to send TV, and the need to split the HFC segments into much smaller sections to sustain throughput. Even with the current low take-up rate of cable Internet, I know many people in Melbourne who find their usable throughput massively degrades at peak times, something that would almost certainly get worse, not better, if HFC became the sole method of high speed Internet in a region.
I also still maintain my server and Internet service at my old place in Melbourne, and will get at least 50/20 there should it ever be launched. When I first got ADSL2+ service I managed to sustain 24/2, but now only 12/1, presumably due to degrading lines, which makes me wonder how fast and reliable a VDSL-based FTTN would actually be.
1: The only real alternate path would add a kilometre or more of cable to avoid that low lying area.
Over the last few months my (now retired) Lenovo T410 had been starting to show its age. I received it in July 2010, making it roughly 21 months old at the time of its retirement; by this point I'd replaced the speakers, and had a spare screen waiting for when the original finally went (I didn't end up swapping it simply because by the time it was getting bad enough I already had the T430 ordered). Other things going wrong were the two left mouse buttons wearing out and the screen lid sensor getting flaky. I was also starting to be pressured by the 8GB RAM limitation, and had purchased 16GB of RAM in the hope that the T410 could actually use it, but no such luck.
I'd thought I might manage to give two generations a miss instead of my normal one, but having seen a preview of the T431 on Engadget I decided to order a T430 anyway, as Lenovo are removing the physical mouse buttons on the next generation. I wasn't a fan of them switching to a chiclet keyboard, but losing the mouse buttons would have been a step too far.
I chose the T430 over the X1 Carbon simply because I wanted support for 16GB of RAM, and over the T430s because I didn't think the weight savings would be worth the extra few hundred dollars (I also thought the T430s had a 7.5mm drive slot while the T430 had a 9.5mm one; I was wrong here).
Ordered top of the line except for Intel graphics and base RAM & HDD, and shipped to the Googleplex, it cost me less than AU$1000, including paying CA sales tax. One of my coworkers was able to bring it back with him on a visit to our Sydney office, saving me having to drop-ship it as I did back in 2007 with my T61. The only reason I went this route was that buying it from Lenovo direct in Australia would have been over 50% more expensive. (My T410 was actually purchased in Australia, at a price that was competitive with the US, demonstrating that they can actually do that.)
Major differences with the T410:
- No firewire - I never actually used the firewire on the T410, and my few bits of media kit with firewire get much more use on my MacBook anyway
- No modem - Finally! It's been years since I used a dial-up modem on a laptop.
- USB3 - Might be useful
- Smartcard reader - Ordered this option under the theory I might migrate my GPG key over to it
- Chiclet keyboard - Surprisingly decent, not quite as good as the old design, but I think I'll be happy enough with it
- 1600x900 screen - This I think is a serious downgrade; I wish they still had a 16:10 option. The extra width ranges from mostly useless to making some apps harder to use maximised.
Sadly I couldn't order the backlit keyboard (oddly this one option was available in the Australian store), but I may well order the part and do the retrofit.
The weight is slightly better, as is the size, but neither is noticeable unless you have a T410 at hand to compare it with.
As with my previous upgrade from T61 to T410, I'd simply planned to pop the drive carrier out of the old machine and into the new and migrate with no effort. Sadly, while getting the drive out of the T410 was easy, I'd missed that the T430 takes a 7.5mm drive, not the old standard of 9.5mm. The SSD I'd only recently upgraded to was 9.5mm, but as it had a plastic top case I was fairly sure that some quick destruction would solve that. Sure enough, after getting the top case off I was able to hot-glue the PCB onto the metal bottom case, screw it into the new carrier, and boot straight into Debian with no issues.
The only teething issue was that the left mouse button on the trackpad would release if I tried to perform a drag. I found some threads claiming this was a design choice, along with the setting in the Windows driver to disable it, but could find no equivalent in the Xorg synaptics driver. Oddly, it simply started working the next morning; I've no idea what the deal is here.
On the topic of shiny new laptops, I got to try the new Chromebook Pixel on Friday evening, and first impressions are that it's by far the nicest laptop I've seen. Although most of the attention has been on the screen, which I agree is nice, I found both the keyboard and speakers more impressive, with the speakers being the best I've heard on a laptop since 2002 (that was a Toshiba Tecra, one I later used in 2004-2005). Sadly I can't swap my work T410, as I require a (standard) Linux laptop for a few reasons (that one will be upgraded to an X1 Carbon soon), but I may well buy one anyway.
I currently don't have a server in my Sydney apartment; sure, I've got a Raspberry Pi doing a few minor things (Radius, syslog, NTP, DNS), but those are all for my network devices, and I don't use them directly.
What I'd love to be able to buy is some form of mini-cluster, probably in the form of a 3ish RU rackmount device that has a gigabit ethernet switch as a backplane, and lets me slot in CPU blades.
Ideally I'd want two kinds of blades, the first being ARM64, with as much RAM as parts will allow (8-32GB), and two disks (allowing a mix of spinning disk for bulk and SSD for speed). These would mostly run something like OpenShift.
The second set of blades would be standard Intel i7, again with as much RAM as possible (32-64GB), and the same storage as the ARM blades (or external) to allow VMs where I can't just run on ARM.
Obviously there'd need to be some form of self-backup, and ideally the cluster admin role could automatically failover between nodes as needed.
With something like this I could deploy one at each of the three sites where I've got servers, allocating an IPv6 /64 to each site, which would let every instance have static addressing and just migrate between nodes.
ARM64 systems have been talked about, but I've yet to see a serious design that isn't a dense cluster, only useful if you've got hundreds of nodes.
I've put money into a lot of Kickstarter projects, and I finally got around to doing a summary of what I've spent:
2012:
19 projects, total of US$3,295 pledged (however $194 was to projects that failed to reach target)
- 2 failed to reach target
- 6 received
- 11 are in clear progress or otherwise still seem likely to ship (several are in the process of shipping or have delivered early parts [eg, ebook before printing])
17 projects, total of US$1,943 pledged (however $10 was to projects that failed to reach target)
5 projects, total of US$1,177 pledged
It's time for another post pointing out what I think Juniper should do (two previous posts in October 2011 covered Qfabric and other switching).
Thankfully it's no longer a secret that I use (the big) Juniper kit at work (and yes, I've logged in to some of the boxes in that photo this week). I also have a bunch of personal contacts at Juniper, but I know nothing about any unreleased products.
Qfabric has really improved, and people really are deploying it. Of my previous list, the things that still stick out are the lack of MX & SRX integration, especially as we now know it's just software blocking it. For smaller sites, if they offered an inbuilt 24-port switch on the control plane nodes it would remove the need for the control plane aggregation boxes.
I'm still amazed they're yet to offer a shared external control plane (for EX, QF and perhaps MX80) and a serious BGP route reflector; surely the profit made from selling M7i's and MX240's (for those with higher BGP needs) purely for BGP reflection can't be that high.
While I'm on the MX, the MX80 is starting to look a little long in the tooth. I believe if they were to do a refresh using the chips in the MPC4 they should be able to double the forwarding throughput (and I'd hope they seriously improve the control plane at the same time).
One more thing I mentioned in one of the previous entries was how I'd love a version of the EX2200-C with 10G uplinks; I really doubt power is the reason they don't offer it (although perhaps they wouldn't be able to do a PoE variant of that combo). In fact I've just ordered two of the PoE version to use in my home networks in both Melbourne and Sydney.
The EX8200, another year on, is now looking very old, and could really use a new fabric to give it denser 10G, and to offer 40G or even 100G ports for those who don't want to go down the Qfabric path.
Speaking of refreshes, it might be interesting to see a device in between the SRX650 and the SRX1400, as Cavium (who make the processors used in the lower-end SRXes) have some nice options now.
While on the SRX: it's beyond pathetic that in 2013 they still don't offer a usable IPv6-supporting CPE (even on the v4 side there are enough limitations that they still don't make a good CPE on their own).
Sadly this leads to my last point: JunOS release management has been going downhill for as long as I've been using their kit (since 2009), and despite several re-orgs it still doesn't seem to be getting much better. Despite dropping back from four releases a year to three, they failed at even that, as 12.3 still hasn't been released.
Earlier in the year they were trying to hire people in the JunOS PM area, I hope they managed to find someone good.
It has very much become a tradition for me to read Cory Doctorow's latest book whilst flying to or from the US, and write a blog post. So this time, while I sit in my business class seat (points upgrade; sadly the company doesn't pay for it), here's this year's edition.
Before I get to that: this year's trip was very short. I flew in last Sunday (the 11th) and am flying back on Monday the 19th (US time; I get back Wednesday the 21st AU time). Excluding the two days lost to travel I've worked pretty much a full day every other day I've been here, making for probably a 70-hour week. Luckily this is a very rare event, and in this case mostly my own fault for trying to stuff as much as possible into the trip.
For this trip I decided to skip the A380 and went classic with the 747. The premium product is fairly similar, although it's down on the main deck, not the upper, which is a shame.
On this return journey I managed to snag a (points) upgrade to business, and return to my favourite aircraft cabin, the 747-400's upper deck. As it happens this was the ill-fated QF108, which simply circled the ocean outside LA for an hour dumping fuel before returning to LA, sending me to the LAX Hilton for a day to wait for a replacement flight back to SYD, which turned out to be QF12, the A380 LAX-SYD service.
Now having tried both, I think I still prefer the upper deck of a 747 over the A380, but it really is only for that sense of privacy you get in the small cabin; other than that the A380 probably wins in every way (except perhaps speed, with the A380 cruising over 100km/h slower than I've usually seen a 747, making the Pacific crossing over an hour longer).
This is actually my first long-haul business class flight. I've done the LAX-JFK Qantas flight in business (in 2008 when I went to HOPE), but at only around five hours that's much shorter than the 14(ish) hours of the full transpac flight, and although I got a business seat on a LAX-MEL flight back in 2009 that was still with premium service, something, until now, I considered fairly minimal. I do still think that for what you get business class is overpriced, but if (as I feel) it's saved me a day of recovery so I can get back to work a day earlier, that can easily be worth the difference in price as you move only a level or two up in an organisation's hierarchy. Just having a nice, quiet, catered lounge at LAX is worth paying some amount for, although the Internet there is only barely usable (and Qantas are *still* stealing other people's IP space for their captive portals, which is a pathetic indictment of whoever runs them).
Although I've started reading "Rapture of the Nerds", I'm only about a third of the way through it. So far it seems very inspired by Anathem (et al; there are many other books in this vein) and it really isn't grabbing me like much of Cory's other work has.
1: 2011 edition: http://laptop006.livejournal.com/55566.html, 2008 edition: http://laptop006.livejournal.com/45918.html. Including this post that's a 60% hit rate, missing only my 2009 and 2007 trips; I did write a few posts about the 2007 trip, but there's about an 18-month empty space around my 2009 trip.
(If you like the wallpaper, how about the rest of my machines?)
This week's project was building a Raspberry Pi laptop, inspired by others' early attempts, the new v2 boards that are designed to allow reverse power feed from the USB host ports (saving the need to hack up USB cables), and the cheapness of Motorola LapDocks on eBay (mine was < $80 at the time, although they've gone up quite a bit recently as it seems others have had the same idea).
From the back you can see how it's put together: the pi is inside a 3D printed case (thanks to Jan at work, who has a MakerBot at his desk and responded to my "hey, would someone consider helping me with this" with "I've done a test print, come try it!"), and that case is simply attached to the outside of the screen portion of a Motorola LapDock (the early Atrix version) which I purchased on eBay for less than a hundred dollars shipped.
From there it's a few simple cables and adapters to use it:
- MicroHDMI female-female adapter & HDMI to microHDMI cable
- MicroUSB extension cable & USB-A to microUSB female adapter
I purchased these all from eBay as well, for a total of about $10.
I did remove the dock part of the LapDock to achieve a more solid design, and then used Sugru to make the connectors and cables into one solid unit.
There are a few issues with this:
- Battery life is fairly low; I suspect swapping out the linear regulator, as others have tried, would help a fair bit.
- Closing the lid interrupts power, crashing the pi (and fairly often corrupting the root filesystem); adding a supercap to the pi has solved this for others.
- Power is controlled by (un)plugging the HDMI connection between the two; this is a pain, and not trivially solvable.
- Although this is a 512MB pi, the firmware needed to use the other half of the memory hasn't hit the Raspbian repos yet, and I would rather keep an updatable Raspbian than get the extra RAM.
- The keyboard, whilst better than the netbooks of old, is still tiny and hard for me to type on.
- The touchpad is giant, but has low sensitivity to movement and high sensitivity to tapping; this is probably fixable in config.
- Need to find a good pi-compatible wifi adapter.
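For the touchpad behaviour, the Xorg synaptics driver exposes runtime knobs that might help; a sketch (the option names are real synaptics driver settings, but the values are my guesses to tune by experiment):

```shell
# Raise the tap-time threshold so light brushes don't register as taps,
# and speed the pointer up a little; values are starting points only.
synclient MaxTapTime=100 MinSpeed=0.8 MaxSpeed=1.6 AccelFactor=0.10
# Inspect the current settings to see what else is tweakable:
synclient -l | grep -i -E 'tap|speed'
```

Anything that works can be made permanent in an xorg.conf.d snippet.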
Ultimately, while this is a nice project to use as a platform to demo the pi, if you actually wanted a decent ARM netbook/laptop I'd just go buy one of the new Chromebooks, which already have docs up on how to run your ARM-capable distro of choice.
Today several people linked to a piece on the "Transition Darebin" website. It's worth pointing out that this group doesn't actually appear to be associated with the Darebin council; it's merely composed of members mainly from that region.
It says a bunch of things I, and many others have a problem with.
I'll just respond to a few points:
1. Delete unused data - Actually, the energy cost of you interacting with computers to find and delete data is almost certainly far greater than the cost of storing that data indefinitely; as long as it's not media (audio, pictures, video) it's most likely very small. The major reason to consider deleting data is if it starts to become painful to find things, not (in general) the amount. At "large enterprise" scales the cost to store a terabyte in a highly available fashion is now in the low hundreds of dollars a year, and that involves having multiple copies available online at all times, plus several copies in multiple locations on tape. When done the way most web sites do it, it's much less. Even if I include all my (archived) received e-mail, my personal data from the last decade would easily fit on a DVD.
2. "Save large files that only need to be read-only as PDFs" - This is actually a good thing to do, but for a different reason. Any file that's in a proprietary format (eg, all MS Office docs, even the newer docx/xlsx/etc. ones) may become unreadable at some point in the future if the software to read it becomes unavailable. Converting the data into a simple format (plain text, HTML, etc.), or at least a (publicly) standardised format such as PDF, helps prevent data loss.
3. "Delete accounts you no longer use" - Worth doing to prevent issues when accounts and/or services get hijacked, but as mentioned above, the cost of the energy used to delete the data is probably higher than the cost to store it indefinitely.
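On point 2, the PDF conversion can even be batched; a sketch using LibreOffice's headless mode (assuming LibreOffice is installed; adjust the glob to taste):

```shell
# Convert every .docx in the current directory to PDF, leaving the originals intact
libreoffice --headless --convert-to pdf --outdir . *.docx
```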
But there's other things not worth doing:
1. Unplugging "phantom loads". The vast majority of phantom loads really are trivial and not worth your time to unplug, and this is getting even better as higher efficiency standards are introduced. As a general rule, if a (small) device is not noticeably warm to the touch when unused, it's not consuming a large amount of power and you shouldn't bother unplugging it unless you won't be using it for weeks on end. Taking an occasional cool shower can save much more energy (especially if you use electric hot water).
Some things that might actually be worth doing:
1. Upgrade hardware every few years (and *properly* recycle old hardware). If you heavily use your computers and other devices, the efficiency gains from new hardware can easily justify themselves; for businesses this is often *trivial*, yet many companies don't consider it. At my previous job, replacing all the servers with fewer, current generation, high performance servers would have cut more than a third off a very expensive power bill, and mostly justified the equipment on that merit alone.
2. Fix teleworking. This, more than most "digital" things, is an issue, and one that the NBN won't solve; companies need to think about what roles remote, or even just occasionally remote, workers can take up. The fossil fuel saved by just a fraction of people commuting less often pays for the energy costs of one *very* large network. The NBN doesn't solve this because the majority of the problem isn't technical, but the policies and procedures at each individual company.
So one of the guys at our office somehow ended up with two Raspberry Pis from the first batch. As one was enough for him to play with he offered the other up, and I turned out to be the only person in the office who wasn't too lazy to walk over to the other building to borrow it for the weekend.
Here's a bunch of useful things that you probably want to do with the default Debian installation to make it more usable.
First, please don't give the foundation guys flack for any of these issues; a decent distro is hard, and I've paid hundreds of times more than this and gotten a horrific hack-job of (usually) Debian (often with a kernel already years out of date, instead of one from this year). This really isn't too bad for a first go.
If you're using the pi on a network or in a public place there are a few things to consider; it's actually pretty good compared to most embedded images I've seen.
Regenerate SSH keys
The pi already has SSH host keys on the image; this is a security issue as it makes you a very easy target for MITM attacks.
As root run:
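Something like the following is the usual Debian approach (these exact commands are my sketch; dpkg-reconfigure regenerates any missing host keys):

```shell
# Remove the host keys shipped on the image...
rm /etc/ssh/ssh_host_*
# ...then have the openssh-server package generate fresh ones
dpkg-reconfigure openssh-server
```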
Note this enables the SSH server on boot, so disable it if you want; see the note below about NFS, just use "ssh" as the service. If you've used SSH to this host before, you'll need to delete the existing entry on your client before SSH will let you connect, due to the new keys.
Consider disabling the NFS client (the only open service by default)
Other than the ports being open this has no security implication, but it does save a lot of boot time.
update-rc.d portmap disable
update-rc.d nfs-common disable
Delete the pi user
Or at least change its password. If you create another admin user, consider removing pi entirely. The "root" user has an invalid password (same as Mac OS, Ubuntu, etc.). The users "tli" and "pnd" exist in /etc/shadow with passwords (but not in /etc/passwd). The user "suse" also has full root by sudo, but doesn't exist.
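A minimal sketch of that cleanup, run as root (only delete pi once another admin account exists; the sudo grant needs checking by hand):

```shell
# Change the pi user's password straight away
passwd pi
# Once another admin user exists, remove pi entirely
deluser --remove-home pi
# Review /etc/sudoers (and /etc/sudoers.d) for the stray "suse" grant
visudo
```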
Most of us don't use UK keyboards; you can switch to your local layout by running "dpkg-reconfigure keyboard-configuration". You may want at least a QWERTY (if not UK English) layout keyboard for this step, as it will be hard without one.
I think the concept of a "British Summer" is an oxymoron so I want to change the timezone to something more relevant to me.
You can do that by running "dpkg-reconfigure tzdata" (again, sudo or root if needed).
If you're using a pi as a server you might want to disable console blanking so if you connect a monitor you don't need to hit a key to wake it up (which you might not be able to do if you've somehow crashed it).
To do this edit /etc/kbd/config and change BLANK_TIME to 0.
You may wish to change to a local Debian mirror by editing /etc/apt/sources.list and changing "uk" to the appropriate two-letter code (see the Debian mirror list). Then, as with all apt based systems, run "apt-get update" to find new packages and "apt-get dist-upgrade" to upgrade to them (be careful what you install unless you've expanded the filesystem, as there's not much free space).
I'd actually suggest the following as a good base Debian apt set; these include security updates:
# Main, the core of debian
deb http://ftp.us.debian.org/debian/ squeeze main contrib non-free
#deb-src http://ftp.us.debian.org/debian/ squeeze main contrib non-free
# Security updates
deb http://security.debian.org/ squeeze/updates main contrib non-free
#deb-src http://security.debian.org/ squeeze/updates main contrib non-free
# Other important updates before point releases
deb http://ftp.us.debian.org/debian/ squeeze-updates main contrib non-free
#deb-src http://ftp.us.debian.org/debian/ squeeze-updates main contrib non-free
The commented-out lines are for source packages; unless you plan to do Debian package development on the board itself they're not worth it.
You can (but probably shouldn't, unless you like killing SD cards) enable swap by uncommenting the swap line in /etc/fstab and rebooting, or by running "swapon -a".
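The line in question looks something like this (partition number assumed from the stock image's layout):

```shell
# /etc/fstab: remove the leading '#' to enable swap
#/dev/mmcblk0p3  none  swap  sw  0  0

# then activate it without a reboot:
swapon -a
```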
Expanding the filesystem to use all (or just more) of your SD card
*WARNING* This is only applicable to the 19/April/2012 Debian build; it's very easy to destroy data by doing this wrong.
I installed on an 8GB card (as it was all I had lying about) and wanted to use all the space available. If you're going to expand the filesystem I'd suggest doing it straight away so you won't feel bad if you stuff up and destroy the OS on the card.
All of this procedure needs to be run as root.
First, change the partition size:
use these commands:
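The steps below are fdisk keystrokes; the invocation itself (SD card device name assumed, as presented on a stock pi) would be:

```shell
# Run as root; edits the SD card's partition table interactively
fdisk /dev/mmcblk0
```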
- Type "p" and press enter, note the "Start" number of p2 (in this image, 1233)
- Delete the swap partition with "d" then "3"
- Delete the root partition with "d" then "2"
- Recreate the root partition with "n" then "2", then the start cylinder (1233 for mine), then either press enter for all the card, or follow the instructions otherwise (using anything less than the old End cylinder of p2 will break your system)
- Verify things look ok by printing the table again ("p")
- If they're all good use "w" to finish.
Once the system is back up, to finish the expansion run:
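The command here would be resize2fs on the root partition (device name assumed for the stock SD layout):

```shell
# Grow the ext4 filesystem to fill the newly enlarged partition
resize2fs /dev/mmcblk0p2
```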
(This took several minutes on my 8GB card)
You can verify the result with "df -h".