This 1950s US Navy training film explains how simple mechanisms can be used to perform mathematical functions. I particularly liked the tangent cams and barrel cams.
Vultures' Picnic and Diesel Generators
Vultures' Picnic is the latest work of investigative reporting by the multi-award-winning journalist Greg Palast. I will admit to being ill-disposed towards this book, as I find Palast's prose style teeth-grindingly irritating. The book is couched in the style of a travelogue mixed with a murder mystery. If you like your journalism dry and well-referenced, this book is not for you.
I have no idea whether the criminal and political conspiracies laid out in the book are accurate. I've seen some of the same claims made by other journalists — usually much better sourced — so I've no reason to believe that Palast is making anything up. His nose for politics and money seems good, but he seems to have a blind spot when it comes to technical issues. He often relies on a single source that tells him what he wants to hear, never mind the evidence.
In one particularly jaw-dropping chapter Palast makes the bold claim that the emergency diesel generators (EDGs) on all nuclear power stations can't work. To quote from page 343:
In other words, the diesels are junk, are crap, are not capable of getting up to full power in seconds, then run continuously for days. They’re decorations attached to nuclear plants so people will think these radioactive tea kettles are safe.
The nice thing about this claim is that it’s easily testable. Has a nuclear power station ever run on its EDGs after losing off-site power? As it happens, yes.
In August 2011 the North Anna plant in Virginia was hit by an earthquake centred just a few miles away. The quake caused both its reactors to go into emergency shutdown and also took out the offsite power. All four EDGs at North Anna sprang into life and supplied power to the emergency systems until offsite power was restored; exactly as they were designed to, and exactly as Greg Palast says they can't.
So why does Palast make this claim? Well, he has history with EDGs. He was part of the legal team that brought conspiracy charges against LILCO over the building of the Shoreham Nuclear Power Station. The corrupt and incompetent LILCO bought prototype, untested generators from a company called Delaval rather than proven generators that would have cost 5% more. These three under-specified generators promptly failed on testing and LILCO were forced to buy the more expensive generators.
Now, I can see how that experience could sour one on the nuclear industry, but it's a long way from "these three generators failed" to "all generators must fail". This is where Palast's inside man appears. There is a long conversation with an unnamed engineer in which Palast is informed that starting EDGs to full power in seconds puts a lot of mechanical strain on them and that they aren't designed to deal with it. The engineer makes the point that the same diesels, when used to pull trains or power ships, are generally warmed up slowly. All of which is true, but beside the point. EDGs are required to run for hours, not years, so shortening their operational life a bit by hard-starting them is a sensible trade-off.
If you are particularly cynical you might be thinking that they got lucky at North Anna. How do we know other generators would perform as well? Because the NRC requires you to actually test that they work.
For the purposes of SR 3.8.1.2 and SR 3.8.1.7 testing, the DGs are started from standby conditions. Standby conditions for a DG mean the diesel engine coolant and oil are being continuously circulated and temperature is being maintained consistent with manufacturer recommendations. In order to reduce stress and wear on diesel engines, the DG manufacturers recommend a modified start in which the starting speed of DGs is limited, warmup is limited to this lower speed, and the DGs are gradually accelerated to synchronous speed prior to loading. This is the intent of Note 2, which is only applicable when such modified start procedures are recommended by the manufacturer.

SR 3.8.1.7 requires that, at a 184 day Frequency, the DG starts from standby conditions and achieves required voltage and frequency within 10 seconds. The 10 second start requirement supports the assumptions of the design basis LOCA analysis in the FSAR, Chapter [15] (Ref. 5). The 10 second start requirement is not applicable to SR 3.8.1.2 (see Note 2) when a modified start procedure as described above is used. If a modified start is not used, the 10 second start requirement of SR 3.8.1.7 applies. Since SR 3.8.1.7 requires a 10 second start, it is more restrictive than SR 3.8.1.2, and it may be performed in lieu of SR 3.8.1.2. In addition to the SR requirements, the time for the DG to reach steady state operation, unless the modified DG start method is employed, is periodically monitored and the trend evaluated to identify degradation of governor and voltage regulator performance.
Got that? Every 31 days do a gentle start of the EDG (if the manufacturer recommends it, otherwise do a hard start), and every 184 days hard-start the EDGs to prove they can get to full power in 10 seconds. The only difference between the hard-start test and a real emergency is that you are allowed to run the coolant and oil pumps beforehand to reduce wear on the engine.
If you have a problem with that testing regime take it up with the NRC, and don’t believe everything Greg Palast tells you.
Westminster Skeptics: The Revolution Will Be Digitized
After nearly a year in London I've finally managed to make it to a Westminster Skeptics in the Pub meeting. I was rewarded with an entertaining and thought-provoking talk by the crusading journalist Heather Brooke. Based on her new book, it focused on journalism in the digital age, with examples from the MPs' expenses scandal and the ongoing saga of WikiLeaks. The spirited Q&A that followed brought up a whole load of issues.
The key things that jumped out at me (in no particular order):
If we believe as a society that journalism is important for holding the powers that be to account, how are we going to pay for it? I have to say that I occasionally feel guilty about not subscribing to a daily newspaper, but then I remember that I really don't want to pay for sports reporters and alternative-health correspondents. What's worse is that I'm an absolute news junkie, so how does a news organisation turn someone like me into a customer? I have no answers here.
Something that came up a few times was the idea of journalists as filterers and synthesisers of information. The way this was being described made me think of Librarians. This isn't an insult: as a former denizen of academia I have had occasional dealings with real librarians; like Dragons they are of fearsome aspect, capable of deep magic, and to be treated with the utmost respect. My gut feeling is that the real difference between a Librarian/Curator and a Journalist is narrative.
The issue of releasing redacted vs complete source material was circled around a few times. The whole business of redacted material bothers me quite a lot. Back when I had peripheral involvement in some medical data projects, the topic of anonymisation of patient data was discussed quite a lot. If you anonymise clinical data properly you are allowed to store it for research purposes on networks that don't meet the same security standards as the clinical network. The reason I mention this is that some research has been done on how much anonymised data you need before you can start identifying individuals; as I recall it's rather less than most people suppose. This makes the appropriate redaction of source material a difficult process. I'll have to see if I can track down the papers I dimly remember on this subject.
Which sadly brings us to the final question of the evening. Reading a book in an hour while standing up in Waterstones, and not buying it, is not an indication of your intellectual prowess; it's an indication that you are an arse. Also, if you are going to insult the speaker, please try to articulate an actual question. Accusations of hypocrisy and mentioning that Julian Assange tried to kiss the speaker do not a cogent argument make.
Right, it’s a school night. I must be off. Looking forward very much to the next Westminster Skeptics.
This post brought to you by beer.
#edited for spelling and links
Willing to pay for sync?
Matt Asay had an interesting post up on the Register today about Amazon's Kindle store outselling Apple's iBooks. His thesis is that this is driven by the fact that Amazon has a much larger range of titles and that Kindle will run and sync across lots of platforms.
It’s sometimes said that people won’t pay for sync, and that they don’t value choice. Kindle’s ebook sales compared to Apple’s iBook sales suggests otherwise. Syncing across different devices matters. Choice matters. The proof is in the sales figures.
Even given the weasel words and spin customarily embodied in corporate sales figures, I'm willing to believe that Amazon are selling a lot more ebooks than Apple. However, I'm not entirely convinced that sync is the reason. Mr Asay's assertion about the reasons for this difference in sales really amounts to a hypothesis. I am a happy Kindle user, and while I only represent a sample size of one, let's see if my experience supports this hypothesis.
I finally jumped aboard the Kindle store when Amazon released the Kindle app for Android. I'd been waiting for the paperback release of The Fuller Memorandum by Charles Stross (I am a huge Stross fan). The reason I don't buy hardbacks is that I just don't have that much space to dedicate to one book in my tiny flat. Much as I'd like my own Library, it's not going to happen in this lifetime. So the availability of Kindle on a device I already owned, together with a Kindle version of a book I really wanted to read, sold me on the idea. At that point I figured the worst that could happen was that I'd waste a few dollars if the reading experience wasn't that great.
Reading on the small screen of my HTC Hero was remarkably easy; I devoured the book in a couple of sittings. Over the next few months I bought more books and enjoyed the fact that I didn’t have to buy yet another bookcase. I finally gave in and bought a Kindle this year so that I could take more books on holiday with me than I normally do, since the battery on the Kindle would last all week whereas my phone would barely last a day.
I almost never read the same book on both devices, so the sync functionality goes unused. Let's see how Mr Asay's hypothesis does against my experience.
1) Kindle ebooks can be read on a lot of different platforms.
Check. It was Kindle's availability on Android that first got me hooked. At that point I wasn't ready to invest in a hardware Kindle and I sure as hell wasn't going to buy an iPad just to read books on. If I'd been an iPhone user maybe the story would have been different.
2) Range of available titles.
Check. A specific book I really wanted was available; the fact that they had that title pushed me past my reluctance to spend money.
3) Sync between different devices.
Nope. I really don’t use it. Don’t get me wrong, I’m glad it’s there, but it played no part in my decision to buy or continue to buy Kindle ebooks.
There is also another aspect of the Kindle store that wasn’t mentioned that I think is very important. Amazon believe in making it as easy as possible for you to give them money. UK credit cards have always worked in the amazon.com store, which is where I first bought Kindle ebooks from. When the UK store became supported in the Android Kindle app Amazon made it very easy to transfer your account from the US store to the UK store. This has never caused a problem and all the books I bought from the US Kindle store still work just fine. My Kindle has never warned me about the maximum number of Kindles I can connect to my Kindle account before I become de-authorised.
I always get the feeling that Amazon is interested in selling you stuff, whereas Apple is interested in selling you things; the stuff is just there to entice you to buy the things that the stuff runs on.
Stepping Through Large Database Tables in Python
In order to report usage on our PBSPro compute cluster at work, I wrote a simple set of Python scripts to dump the accounting information into a MySQL database. This has been working fine for the last year, churning out reports every month.
This week I had cause to generate some statistics aggregated across the whole three years of data in the database. I'm using a mixture of Elixir and SQLAlchemy to talk to the database. Normally I would do something like this:
mybigtablequery = MyBigTable.query()

for job in mybigtablequery:
    if job.attribute == "thing":
        dosomething()
This worked fine when the database was quite small, but I was horrified to see that as the loop went on my Python process used more and more memory, because the database connection object never throws away a row once it has been loaded. Fortunately I found an answer on Stack Overflow.
So I ended up doing the following:
def batch_query(query, batch=10000):
    offset = 0
    while True:
        # r records whether this batch returned any rows at all
        r = False
        for elem in query.limit(batch).offset(offset):
            r = True
            yield elem
        if not r:
            break
        offset += batch

mybigtablequery = MyBigTable.query()

for job in batch_query(mybigtablequery, 50000):
    if job.attribute == "thing":
        dosomething()
"batch" is just an integer defining how many rows will be fetched by each query. The larger it is, the more memory the Python interpreter will use, but the more efficiently the code will run, since fewer queries are issued.
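Another approach worth knowing about is SQLAlchemy's built-in Query.yield_per(), which fetches rows in batches while keeping a single cursor open. A minimal sketch, which I haven't tested against this database (and note that some DB drivers, MySQLdb included, may still buffer the entire result set on the client unless you arrange for a server-side cursor, which is why the offset/limit trick above can be the safer bet):

    # sketch: stream rows in batches of 10000 over one cursor
    for job in MyBigTable.query().yield_per(10000):
        if job.attribute == "thing":
            dosomething()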
Flexlm License Servers and Firewalls
If you are lucky enough to run your flexlm servers on a tightly controlled corporate network then you probably just turn the firewall off on those servers and get on with your life. Everyone else goes through a certain amount of hair-pulling before they work out how to make flexlm play nicely with firewalls. So I’m writing this post to document the process as much for me as anyone else.
So let us say that you have bought five copies of Bob's Magical Pony Viewer, an awesome graphical client that you can run to show you ponies. In order to be sure that you only run five copies, Bob has used flexlm to secure his software. You have received a network license for BobSoft that looks like this:
SERVER license1 0000eeeeeeee 2020
VENDOR bobs_lm
FEATURE PonyL bobs_lm 1.0 06-jan-2011 5 \
SIGN="EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE \
EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE"
So you think: great, we can set that up with only port 2020 open on the license server and everything will be excellent. Ponies for five concurrent users, hurray!
Except of course when you try that, Pony Viewer adamantly claims that it can't contact the server, even though you can netcat/telnet to port 2020 on that server and the flexlm logs tell you that the server is running just fine.
It's helpful at this point to have a copy of lmutil around to debug the problem. I don't know where to get lmutil on its own, as ours came bundled with the license server software from one of our vendors, but it's very useful when trying to work out what is going on.
So let's try some things.
#>lmutil lmstat -c 2020@license1
lmutil - Copyright (c) 1989-2004 by Macrovision Corporation. All rights reserved.
Flexible License Manager status on Thu 1/21/2010 19:56
Error getting status: Server node is down or not responding (-96,7)
This is the point at which one normally starts the hair-tearing. The thing to realise about a flexlm server is that it's actually two daemons working together: lmgrd, which is running on port 2020, and the vendor daemon (in this case bobs_lm), which will start up on a RANDOM port. What is even better is that the vendor daemon will choose a different random port every time you restart the license server.
While discussing this with some fellow sysadmins, it turned out that there is another option you can add to flexlm license files which ends this misery: you can tell the vendor daemon to start on a specific port, like so:
SERVER license1 0000eeeeeeee 2020
VENDOR bobs_lm port=2021
FEATURE PonyL bobs_lm 1.0 06-jan-2011 5 \
SIGN="EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE \
EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE"
And now when we try lmutil:
#>lmutil lmstat -c 2020@license1
Flexible License Manager status on Thu 1/21/2010 20:06
License server status: 2020@license1
License file(s) on license1: /opt/BobSoft/license.dat:
license1: license server UP (MASTER) v11.6
Vendor daemon status (on license1):
bobs_lm: UP v11.6
Hurray Ponies!
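With both daemons pinned to known ports, the firewall side becomes simple. A minimal iptables sketch (assuming a Linux license server and the port numbers from the example license file above; adjust for whatever firewall tooling your distribution uses):

# lmgrd listens on the port given on the SERVER line
iptables -A INPUT -p tcp --dport 2020 -j ACCEPT
# the vendor daemon listens on the port= value on the VENDOR line
iptables -A INPUT -p tcp --dport 2021 -j ACCEPT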
One last thing to note: make sure the hostname you specify in the license file matches the hostname of the license server and also the hostname you use when connecting to the server. This is because flexlm sends the hostname asked for as part of the license request, and if the two don't match you won't get any ponies.
In short, flexlm is a dreadful license server; it's just that all the others are even worse.
SC09 – Interesting Tech – Filesystems/Storage
All the usual suspects were visible in Portland this year, including Panasas, DataDirect, Isilon, Lustre and IBM/GPFS. But we've all seen those before. Two storage-related technologies caught my eye at SC09 because I'd never seen them before.
I caught a technical session from a Korean company called Pspace. They developed a parallel filesystem called Infinistor for a couple of big telco/ISPs in Korea. It's a pretty straightforward parallel filesystem with metadata and object data handled by separate servers. Object servers are always at least N+1, so you can lose a whole object server without losing access to your data.
The neat things about Infinistor are that it keeps track of how often data is accessed and it understands that some storage is faster than others. So you could have some smaller servers based on SSDs, and Infinistor will replicate frequently accessed content to the fast disks. It can even handle multiple speeds of storage within one object server.
As you might expect from a project born in ISP-land, it has a lot of support for replication across multiple sites, since it's always good to serve clients data from a node close to them on the network. Infinistor can replicate synchronously or asynchronously, with the latter prioritised for frequently accessed content.
File access is via a POSIX filesystem (it will do NFS or CIFS) or a REST API.
As ever with big conferences, not everything you learn comes from the sessions or the exhibition hall. I got chatting to an engineer from Pittsburgh Supercomputing Center about the parallel filesystem they wrote called ZEST. The best thing about this filesystem is that you can't read from it.
So I should back up for a second and describe the problem ZEST is trying to solve, since most of you are probably thinking "what use is a filesystem you can't read from?". Here in HPC-land we have all these big machines with thousands of very fast cores and big, fast interconnects. All this costs money. Unfortunately, the more nodes you are running across, the more likely you are to hit a problem (e.g. dividing the current day in the Mayan Long Count by the least significant digit of your HCA's firmware revision causes it to turn into a pumpkin, or one of the million other failure modes that are wearyingly familiar to HPC ops people around the world). When this happens you don't want to lose all the time you've spent up until the fault happened. And lo, unto the world did come Checkpointing.
Which is basically to say that a lot of big codes will periodically dump their running state to disk so that, in the event of a problem, they can pick up from the last checkpoint. Obviously this can be terabytes of data, and it takes a while to write to disk. While you are doing that, all those shiny, shiny CPUs are sitting idle. This makes the Intel salesman happy, but makes your funding agencies cry.
So the approach in ZEST is to remove all the complexity involved in making a filesystem you can read from, in order to allow clients to write as fast as possible. There are a number of interesting design decisions here. ZEST storage servers don't use RAID but assign write queues to each individual disk. All the checksumming and parity calculations are done on the client (because these are over-endowed HPC nodes we are talking about). By stripping away all this complexity, ZEST aims to give each write client the full bandwidth of the disk it's writing to. Because most codes will be checkpointing from multiple nodes at once, this adds up to significant aggregate bandwidth.
As an offline process, the files that have been dumped to disk are re-aggregated and copied onto a Lustre filesystem, from where they can be read. So I kind of lied when I said you can't read from it. More technical detail can be found in the ZEST paper.
SC09 – Interesting Tech – Shared Memory
We are approaching the end of the conference formerly known as SuperComputing, so I thought it was about time I started to write up some of the copious notes that have begun to clutter up the hard drive of my netbook.
One of the problems we had with our last procurement was that real shared-memory systems couldn't be fitted into the budget, so we had to make do with a set of 16-core commodity boxes. We have some codes that could do with scaling out a little further than that.
Which brings me nicely to 3Leaf, who are building technology to hook multiple commodity boxes together so that the OS (a normal Linux build plus some proprietary kernel modules) sees them as one machine. All the hardware on the individual nodes should be visible to the OS just as it would be on a single machine, so you can do weird things like software RAID across all the single SATA disks in a bunch of nodes. 3Leaf caution that there may be some funky hardware out there that wouldn't interact well with their setup, but they haven't met it yet. The interconnect is DDR InfiniBand. While it's not stated up-front by 3Leaf, conversations with them indicate that the ASIC implements some kind of virtualisation layer, which makes it sound rather like ScaleMP in hardware.
A stack of 3Leaf nodes is essentially a set of AMD boxes with the 3Leaf ASIC sitting in an extra AMD CPU socket. The on-board IB is then used to carry communications traffic between the separate nodes. The manager node (a separate lower spec box) controls the booting and partitioning of the nodes such that a stack can be brought up as one big box or several smaller units.
My favourite thing about the 3Leaf solution is that you can add extra IB cards which will behave normally for the OS. This means you can interface the stack to things like Lustre or NFS/RDMA over IB, which many HPC facilities will already have in operation.
While currently AMD-only, 3Leaf claim they will have a product ready for the release of the next version of Intel's QPI.
And in case you think this might be vapourware, apparently Florida State have just bought one.
On a more traditional note, SGI announced the availability of their new UV shared-memory machine: essentially an Altix 4700 with uprated NUMAlink and x86_64 chips rather than Itanium. The SGI folks swear that no proprietary code is necessary to make these machines work and that all the kernel support is in mainline. If so, that is a very positive step for SGI to take. Hardware MPI acceleration is supported by SGI's own MPI stack; it wasn't clear to me whether SGI expect other MPIs to be able to take advantage of this capability. Depending on the price-point, UV might be a very interesting machine.
Speaking of all things NUMA, I had an interesting chat with the chaps at Numascale. It turns out they are a spin-off from Dolphin. They are making an interconnect card on HTX that will do ccNUMA on commodity AMD kit. The ccNUMA engine is a direct descendant of the one in the Dolphin SCI system (I should note that we still have a Dolphin cluster in operation back home). Like SCI, this interconnect is wired together in a loop/torus/3D-torus topology without a switch.
Numascale have evaluation kit built on FPGAs at the moment and expect to tape out the real ASICs early next year. Like 3Leaf, they claim to be working on a version for the next version of Intel's QPI.
And now we move from shared memory to memory-sharing. Portland's own RNA Networks have a software technology for sharing memory over IB. You can take chunks of memory on several nodes and hook them together as a block device to use as a fast cache. If you stack-mount this over another networked filesystem it acts as an extra layer of caching: access goes first to the local page cache, then over IB to the RNA cache, and finally over the network to the original filesystem. I can see a number of use cases where this could add parallel scaling to a single network filesystem, although at roughly $1000 per node plus IB I'm not sure it works out cheaper than some of the cheaper Ethernet-based clustered storage systems.
You can also use this memory-based block device to run a local parallel filesystem if you want, although I can't quite see the use case.
One thing I forgot to ask was whether the cache can be used as straight physical RAM for those really naive codes that just read a whole bunch of data into memory and could do with access to extra space.
Playing with Hugin Panorama Stitcher
I finally got around to playing with stitching some photos together using the Hugin stitching tool. Hugin is very powerful, but I would have found it entirely incomprehensible were it not for an excellent tutorial from Lifehacker.
I'm quite pleased with how this initial test came out, so it's definitely a technique I'll use again in the future.
Europython – Days 3 to 5 – Roundup
As Europython got more hectic and my 3G connection got more erratic, my daily blogging ceased. So this roundup is mostly the result of notes I wrote on the train journey through the picturesque Welsh Borders back home to Cardiff. These are the talks that made an impact on me.
GIL isn't Evil: Russell Winder, who is every inch the stereotype of a former theoretical physicist, showed some simple benchmarks of threads vs Parallel Python vs multiprocessing, which demonstrated that you can get good parallel speed-up in Python by using the latter two approaches. We have a number of people who use NumPy and SciPy on the cluster and it would be interesting to see if we could get some quick speedups for them using these approaches; a sketch of the multiprocessing flavour follows.
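For flavour, here is a minimal sketch of my own (not code from the talk) of the multiprocessing approach: farming a CPU-bound function out to a pool of worker processes, which sidesteps the GIL because each worker is a separate process rather than a thread.

    import multiprocessing

    def square(x):
        # stand-in for some CPU-bound work
        return x * x

    if __name__ == "__main__":
        pool = multiprocessing.Pool()   # one worker per core by default
        results = pool.map(square, range(100000))  # split across the pool
        pool.close()
        pool.join()
        print(sum(results))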
Twisted, AMQP and Thrift: A quick introduction to AMQP and the fact that lots of big financial companies are ripping out Tibco/IBM MQ to make way for it. These guys wrote Twisted interfaces to AMQP and Thrift so that you can make Thrift RPC calls and everything magically goes over AMQP. It was interesting, but without taking a serious look through some example code I'm not sure whether it will be useful for any of my particular projects.
PIPPER: A Python system where you add comments, very much like OpenMP pragmas, allowing you to parallelise for loops. It does this by serialising the function and the data that go to it and sending them over MPI to a C-based engine that runs the function and returns the data over MPI. This is nice because it lets you take advantage of the MPI stack and interconnect on a proper compute cluster. However, it can't handle the full Python language and you can't use C extensions, which means NumPy and SciPy are out of the window. That's a shame, because most of the codes you could trivially parallelise with this system use NumPy.
Python and CouchDB: An opinionated Mozilla hacker talking about how awesome CouchDB is. I understood it a bit more by the end of the session and kind of wondered what it would be like to dump log files straight into it. The talk in the corridors was that MongoDB looked a bit more production-oriented; however, I managed to miss that talk so will have to look it up later.
Keynote by Bruce Eckel: He started by pimping the unconference idea, which looked good to me and got me thinking about whether there might be room for the approach at work. The language-archaeology part was entertaining, but I can't remember a single thing from it.
Ctypes: This was a really useful talk. Greg Holling did a good job of going through some of the pitfalls of ctypes, such as 32/64-bit int mismatches with the underlying C API, so you should always cast to one of the ctypes types to make explicit what you are passing through. This was probably the talk most likely to make an impact on my production code over the next twelve months.
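To illustrate the pitfall with an example of my own (not one from the talk, and assuming a 64-bit Unix-like system): a C long is 64 bits on LP64 platforms, but if you let ctypes guess it treats arguments and return values as 32-bit C ints, silently truncating larger values. Declaring argtypes and restype keeps the sizes honest:

    from ctypes import CDLL, c_long
    from ctypes.util import find_library

    libc = CDLL(find_library("c"))

    # Without these declarations ctypes assumes C int arguments and
    # an int return value, which truncates on LP64 platforms.
    libc.labs.argtypes = [c_long]
    libc.labs.restype = c_long

    print(libc.labs(c_long(-5000000000)))  # prints 5000000000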
OpenERP: In hindsight I should have also gone to the OpenObject talk, which explained the underlying data model. The best thing about this was that each module can stand alone, i.e. you can install the inventory module or the CRM module without all the others. OpenERP speaks a web-services API, so it would be very easy to develop against. There is a chance I may be able to solve some of the organisational challenges at work by throwing this tech at them.
Python for System Admin: A good talk, somewhat hampered by John Pinner's need to support Python 2.2, which made some of the code examples look a bit strange. John is of the opinion that argparse is better than optparse (which I habitually use); a small taste of argparse follows. One of the other attendees pointed me to a PyCon 2009 talk on argparse which apparently explains the difference.
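For a taste, here is a minimal argparse example of my own (not from the talk; note that argparse was a third-party package at the time, before being folded into the standard library in Python 2.7):

    import argparse

    parser = argparse.ArgumentParser(description="Report cluster usage")
    parser.add_argument("--month", type=int, default=1,
                        help="month number to report on")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="chatty output")
    args = parser.parse_args()
    print(args.month)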
Software Apprenticeship: An interesting approach to training programmers that made a lot of sense to me. In Britain we have an awful tendency to belittle vocational training in comparison to academic education, when for a great many professions we could do with more of the former and less of the latter. Lots to think about, and Christian Theune provided a wealth of advice based on his practical experience of helping to train apprentices.
This was my first EuroPython and I found it educational and entertaining. I was exposed to lots of interesting technology, some of which may improve my daily work. More important than that was the opportunity to talk to other Python developers about their experiences of using Python to get real work done. EP is in Birmingham again next year so I have no excuse not to attend. Many thanks to all the hard-working people who helped to organise this conference and the wonderful delegates who made it so much fun to attend.