Monday, December 28, 2009

Dist::Zilla -- part 1

Inspired by RJBS's advent article on Dist::Zilla, I'm getting ready to give it a spin.

Install Dist::Zilla

Install a bundle. This worked fine, but didn't bring in Dist::Zilla itself:

% cpan Dist::Zilla::PluginBundle::Git

./Build install -- OK

Next, attempt to install the base Dist::Zilla. This failed:

% cpan Dist::Zilla

Running make test
Has already been tested successfully
Running make install
Already tried without success

Cleaning my .cpan/build directory and trying again.

Before cleaning up the files, I'll check the Makefile for the prereqs, to see if I can narrow down the issue. I then cleared out my build space, manually installed the prerequisites, and installed Dist::Zilla. This worked.

[andrew@mini ~/.cpan/build/Dist-Zilla-1.093400-1o8qqf]% perl Makefile.PL
Warning: prerequisite Config::INI::MVP::Reader 0.024 not found.
Warning: prerequisite Config::MVP 0.092990 not found.
Warning: prerequisite Hash::Merge::Simple 0 not found.
Warning: prerequisite Moose::Autobox 0.09 not found.
Warning: prerequisite MooseX::Types::Path::Class 0 not found.
Warning: prerequisite PPI 0 not found.
Warning: prerequisite String::Flogger 1 not found.
Warning: prerequisite namespace::autoclean 0 not found.
Writing Makefile for Dist::Zilla
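One way to avoid hand-copying that list next time (a sketch of the idea, not what I actually ran): scrape the module names out of the warnings and feed them straight back to cpan.

```shell
# Sketch: extract missing-module names from Makefile.PL warnings.
# In practice you'd pipe the real thing:
#   perl Makefile.PL 2>&1 | awk '/^Warning: prerequisite/ {print $3}' | xargs cpan
# Demonstrated here on captured warning text:
warnings='Warning: prerequisite Config::MVP 0.092990 not found.
Warning: prerequisite PPI 0 not found.'
mods=$(printf '%s\n' "$warnings" | awk '/^Warning: prerequisite/ {print $3}')
echo "$mods"
```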

[andrew@mini]% rm -rf ~/.cpan/build/*

[andrew@mini]% cpan Config::INI::MVP::Reader Config::MVP Hash::Merge::Simple Moose::Autobox MooseX::Types::Path::Class PPI String::Flogger namespace::autoclean

...[snip]...[this brought in a lot of deps]
/usr/bin/make install -- OK

[andrew@mini]% cpan Dist::Zilla

Installing /apps/perl5/bin/dzil
Appending installation info to /apps/perl5/lib/perl5/i486-linux-gnu-thread-multi/perllocal.pod
/usr/bin/make install -- OK

And if I'm going to cargo cult from RJBS and use his tool, then I might as well go all the way by installing the RJBS plugin bundle.

% cpan Dist::Zilla::PluginBundle::RJBS

Now what? Using Dist::Zilla

% dzil new My-Project
will create new dist My-Project in obj(/home/andrew/src/My-Project)
$VAR1 = {};
% cd My-Project
% ls
% cat dist.ini
name = My-Project
version = 1.000
author = andrew
license = Perl_5
copyright_holder = andrew


% mkdir lib t

Now, create a stub module in lib/My/, something like this (copied straight from the quoted article):

use strict;
package My::Project;
# ABSTRACT: our top-secret project for playing bowling against WOPR

use Games::Bowling::Scorecard;
use Games::War::Nuclear::Thermonuclear::Global;
use Path::Resolver 2.012;

=method play_a_game

This method starts a game. It's a strange game.

=cut

sub play_a_game { ... }

1;


The # ABSTRACT comment will be pulled out and used as metadata.
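So in the generated META.yml you should expect to see it as the abstract (an illustrative fragment, not captured from an actual build):

```
name: My-Project
version: 1.000
abstract: our top-secret project for playing bowling against WOPR
```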

Now, let's build the module:

% dzil build
beginning to build My-Project
guessing dist's main_module is lib/My/
extracting distribution abstract from lib/My/
couldn't find a place to insert VERSION section to lib/My/
rewriting release test xt/release/pod-coverage.t
rewriting release test xt/release/pod-syntax.t
writing My-Project in My-Project-1.000
writing archive to My-Project-1.000.tar.gz
And now take a look at what it built:

% find My-Project-1.000

This created the META.yml file, built the MANIFEST, created two additional tests: release-pod-syntax and release-pod-coverage, built a README and copied in the correct LICENSE file. And then it tarred it all up for me. Excellent.

There are additional plugins that can be used within the dist.ini file. [@Git] will verify that all the files are checked into git before doing the build. [@RJBS] will use the RJBS bundle, to pull in all the steps he normally uses for a module. Searching for Dist::Zilla::Plugin on cpan produces 6 pages of results.
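For example, enabling those bundles is just a couple more lines in dist.ini (an illustrative layout; check each bundle's docs for the options it actually takes):

```
name    = My-Project
version = 1.000
author  = andrew
license = Perl_5
copyright_holder = andrew

; pull in RJBS's standard release steps, then the git checks
[@RJBS]
[@Git]
```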

I'll post an update as I work on using dzil for a real module, and let you know how it goes. So far, I'm pretty excited at keeping my code and boilerplate separated.

Saturday, December 26, 2009

More Perl Advent Calendars

After I finished my post on perl advent calendars I stayed up for two more hours and finished reading all of RJBS's calendar. A midnight to 2am well spent.

Working through the perl advent calendar, I found links to a few more.

Plack Advent Calendar: miyagawa walks us through the new hawtness that is PSGI & Plack, running from day 1 of installing Plack through creating apps, hosting multiple apps on one install, and beyond. A nice addition to the Plack documentation. We should all learn this one, and start building against the PSGI specification so our webapps can be deployed across a range of server types.

Another site has several advent calendar tracks up: hacker, casual, dbix-skinny, data-model. They are all written in Japanese, but the code snippets are in Perl.

Day 2 of the hacker track has a nice piece on opts, which is a DSL (domain specific language) for command-line parsing, a nice wrapper around Getopt::Long.
use opts;

opts my $foo => { isa => 'Str', default => 'bar' },
     my $bar => { isa => 'Int', required => 1, alias => 'x' };

Merry Christmas indeed!

installing hadoop on ubuntu karmic

Mixing and matching a couple of guides, I've installed a local hadoop instance on my netbook. Here are my notes from the install process.

I'll refer to the guides by number later. Doc 1 is the current #1 hit for 'ubuntu hadoop' on google, so it seemed a good spot to start.



1) created a hadoop user and group, as per document 1. Also an ssh key for the hadoop user (currently passwordless; will revisit that soon).

2) added the jaunty-testing repo from cloudera, see doc 2. (They don't have a karmic package yet.) Add /etc/apt/sources.list.d/cloudera.list:

#deb karmic-testing contrib
#deb-src karmic-testing contrib
#no packages for karmic yet, trying jaunty-testing, jaunty-stable, jaunty-cdh1 or jaunty-cdh2
deb jaunty-testing contrib
deb-src jaunty-testing contrib

3) install hadoop:

[andrew@mini]% sudo aptitude install hadoop
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Reading extended state information      
Initializing package states... Done
"hadoop" is a virtual package provided by:
  hadoop-0.20 hadoop-0.18 
You must choose one to install.
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
Need to get 0B of archives. After unpacking 0B will be used.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Reading extended state information      
Initializing package states... Done

3b) sudo aptitude update, sudo aptitude install hadoop-0.20

[andrew@mini]% sudo aptitude install hadoop-0.20
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Reading extended state information      
Initializing package states... Done
The following NEW packages will be installed:
  hadoop-0.20 hadoop-0.20-native{a} 
0 packages upgraded, 2 newly installed, 0 to remove and 25 not upgraded.
Need to get 20.1MB of archives. After unpacking 41.9MB will be used.
Do you want to continue? [Y/n/?] Y
Writing extended state information... Done
[... snip ...]
Initializing package states... Done
Writing extended state information... Done
4) This has set up our config information in /etc/hadoop-0.20, which is also symlinked as /etc/hadoop. The configuration is loaded from /etc/hadoop/conf/ (aka /etc/hadoop-0.20/conf.empty/).

Modify the environment file to point to our JVM. Since I installed Sun Java 1.6 (aka Java6), I updated it to: export JAVA_HOME=/usr/lib/jvm/java-6-sun

5) update the rest of the configs.
Snapshotted conf.empty to ~/config/hadoop/conf and started making edits, as per doc 1. Symlinked it into /etc/hadoop/conf.

Files available at document #3, my github config project, in the hadoop/conf subdir.
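As a sample of the kind of edit doc 1 walks through (these values are my assumptions for a single-node setup, pointing hadoop.tmp.dir at the tmp directory created in step 7 below; see the github repo for the real files), core-site.xml ends up looking something like:

```
<?xml version="1.0"?>
  <property>
  <property>
```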

6) switch to hadoop user
sudo -i -u hadoop

7) initialize hdfs (as the hadoop user)
mkdir ~hadoop/tmp
chmod a+rwx ~hadoop/tmp
hadoop namenode -format

8) fire it up: (as hadoop user)

hadoop@mini:/usr/lib/hadoop/logs$ /usr/lib/hadoop/bin/
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
hadoop@mini:/usr/lib/hadoop/logs$ /usr/lib/hadoop/bin/
starting namenode, logging to /usr/lib/hadoop/bin/../logs/hadoop-hadoop-namenode-mini.out
localhost: starting datanode, logging to /usr/lib/hadoop/bin/../logs/hadoop-hadoop-datanode-mini.out
localhost: starting secondarynamenode, logging to /usr/lib/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-mini.out
starting jobtracker, logging to /usr/lib/hadoop/bin/../logs/hadoop-hadoop-jobtracker-mini.out
localhost: starting tasktracker, logging to /usr/lib/hadoop/bin/../logs/hadoop-hadoop-tasktracker-mini.out

9) Check that it is running via jps
hadoop@mini:/usr/lib/hadoop/logs$ jps
12001 NameNode
12166 DataNode
12684 Jps
12568 TaskTracker
12409 JobTracker
12332 SecondaryNameNode
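A quick sanity check I could script (my own sketch; the sample jps output is inlined here so it runs anywhere): make sure all five daemons appear.

```shell
# Verify the expected hadoop daemons appear in jps output.
# Sample output inlined; in real use: jps_out=$(jps)
jps_out='12001 NameNode
12166 DataNode
12568 TaskTracker
12409 JobTracker
12332 SecondaryNameNode'
missing=0
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  # each daemon name must appear at end-of-line, preceded by its pid
  printf '%s\n' "$jps_out" | grep -q " $d\$" || { echo "missing: $d"; missing=1; }
done
echo "missing=$missing"
```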

(note to self, why don't we have hadoop completion in zsh? Must rectify)

10) Run the example. See doc 1:
hadoop jar hadoop-0.20.0-examples.jar wordcount gutenberg gutenberg-output

hadoop@mini:~/install$ hadoop jar hadoop-0.20.1+152-examples.jar wordcount gutenberg gutenberg-output
09/12/25 23:24:19 INFO input.FileInputFormat: Total input paths to process : 3
09/12/25 23:24:20 INFO mapred.JobClient: Running job: job_200912252310_0001
09/12/25 23:24:21 INFO mapred.JobClient:  map 0% reduce 0%
09/12/25 23:24:33 INFO mapred.JobClient:  map 66% reduce 0%
09/12/25 23:24:39 INFO mapred.JobClient:  map 100% reduce 0%
09/12/25 23:24:42 INFO mapred.JobClient:  map 100% reduce 33%
09/12/25 23:24:48 INFO mapred.JobClient:  map 100% reduce 100%
09/12/25 23:24:50 INFO mapred.JobClient: Job complete: job_200912252310_0001

hadoop@mini:~/install$ hadoop dfs -ls gutenberg-output
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2009-12-25 23:24 /user/hadoop/gutenberg-output/_logs
-rw-r--r--   1 hadoop supergroup      21356 2009-12-25 23:24 /user/hadoop/gutenberg-output/part-r-00000

It Lives!

Wednesday, December 23, 2009

Perl Advent Calendar(s)

It's that time of year again! Ok, I'm 23 days late for the start, but I'm still within the valid range. Perl Advent Calendars!

I just stumbled across RJBS's Perl Advent Calendar. The article for Today (2009-12-24) is on Email::Sender::Simple. This is a great module. Seriously, if you are doing any email sending, this is what you should use.

I found and used this module over this past summer, while updating some legacy php code that included an emailed report. I was not looking forward to reimplementing the email sending routines in my perl version, but then I found Email::Sender::Simple and it is (as you'd expect from the name) super simple. Unless you want it to be more complex, and then it'll be more complex.

This whole advent calendar is pretty cool, because it is all stuff he's been working on. A brief moment of webstalking, and now I know who RJBS is. He wrote jgal, an igal clone, way back when. I should have an email around here from when I first switched from igal to jgal. That must have been a long time ago... it was, among other things, PRE-FLICKR! Hmm, scratch that: it seems I was confusing jgal with jigl, both of which were igal-inspired. I did just find an email from Oct 2003 personally announcing jigl v2.0 (jason's image gallery).

Some perl advent calendars:

Saturday, December 19, 2009


In a fit of inspiration, I flipped through my videos on my XBMC this morning and realized that I still have a copy of "MIT: Introduction to Algorithms" recorded in 2001. This morning I've been watching episode 3, "Divide and Conquer", recorded 9/12/2001.

It's interesting to see the people reacting to 9/11, when it was still a "great tragedy" before it became "terrible attacks", when the call was "Feel this pain, yes, but get out and keep making the world better. That's what we must do after an event like this." And then right back into math and CS. I'm glad they didn't have any vignette on "divide and conquer" as practised by the Romans or British (I see that they did make a comment like that in the 2005 lesson).

This is a nice refresher. It's been a long time since I've done formal analysis of algorithm run-times beyond "back of the envelope" estimations. If the cost of splitting a problem in two is small compared to the speed/complexity improvement of solving a half-size problem, then this is a win. See also map-reduce et al.
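The back-of-the-envelope version of that claim is the standard divide-and-conquer recurrence; merge sort is the usual example, where splitting and recombining cost linear time and each half costs T(n/2):

```
T(n) = 2*T(n/2) + O(n)   =>   T(n) = O(n log n)
```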

Perhaps it is also time to go through "Algorithms in Perl" and update it for "modern perl"isms? That'd make an interesting blog thread.
A recording from 2005 is available online and on youtube.

Peteris Krumins watched and blogged about all of these episodes. In fact, I think he's the one who transcoded them from realmedia and posted them online. Thanks for releasing these under a CC license, MIT!

Friday, December 11, 2009

Dec Perl Mongers was a blast

We'll have the slides up soon along with some Perl+VIM tips. Really, I mean it this time. Do you have some favorite perl/shell/vim integration tips you'd like to pass along?

In the meantime, you can follow along in the git repository, including the drama where 10 minutes before presentation time I merged in changes from my cohort that deleted 90% of the files, and then blindly pushed those up to the public master.

You can look directly at the Vroom slides from . I'll render those to html and push them somewhere for viewing.

Omnicompletion actually worked during my talk/demo. I think that's the first time I've ever actually had it work. Totally exciting. While preparing the slides, I found that I was missing a configuration option in my vimrc, so my ftplugin/perl directory was getting skipped.

Tuesday, December 1, 2009

plack, is it really this easy?

I just wrote my first plack based prototype, built around Plack::Request. Plack::Request isn't normally used directly, but I'm using it here as a simple prototype to run a feasibility test for a proposed new project.

I took a coworker's existing MyEngine ModPerl2 prototype and refactored it into modules, then reused those in the plack script listed below. (In between I added some tests; yes, it is all currently boilerplate scaffolding, but that doesn't mean it shouldn't be tested.)

All this server has to do is take an incoming request, pass it to a function from the (business) Logic module to get a list of possible responses, call a helper from the Logic module to pick the winner, and finally return the winner to the caller via JSON. Picking the possible responses can be time consuming and will be capped somewhere in the 100-250 ms range, none of which is important yet outside of the business logic. That will get interesting when I look into parallelizing the possible-responses code (likely moving from prefork to Coro as the plack server to accommodate this).
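A minimal sketch of that shape of app (hypothetical names: MyEngine::Logic, possible_responses, and pick_winner are stand-ins, not the real prototype's modules):

```shell
# Write out a toy app.psgi showing the request -> logic -> JSON flow.
cat > app.psgi <<'EOF'
use strict;
use warnings;
use Plack::Request;
use JSON;
use MyEngine::Logic;   # stand-in name for the business-logic module

my $app = sub {
    my $req = Plack::Request->new(shift);

    # ask the logic module for candidates, then pick the winner
    my @possible = MyEngine::Logic::possible_responses($req->parameters);
    my $winner   = MyEngine::Logic::pick_winner(@possible);

    my $res = $req->new_response(200);
    $res->content_type('application/json');
    $res->body(encode_json($winner));
    return $res->finalize;
};
$app;
EOF
grep -c '^use ' app.psgi
```

Then it runs under any of the plackup invocations below, e.g. plackup --port 8080 app.psgi.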

Next up I'll benchmark this on a production caliber machine to test for overhead, latency, throughput and max connections. These will provide upper-bound possibilities, since the business logic is mostly empty scaffolding at this point. Testing will be against both Standalone::Prefork and Coro plack server backends.

Running the server via Standalone::Prefork server:
plackup --server Standalone::Prefork --port 8080
Running the server via Coro server:
plackup --server Coro --port 8080

Saturday, November 28, 2009

los angeles perl mongers : next meeting Wed, Dec 2

The site has been updated with the date and topic of the next perl mongers meeting, which is this coming Wednesday. How is it December already?

We'll be doing a group discussion of perl+vim, (wonder twins activated). Bring your favorite tricks and config files. This can be anything from using vim tricks to make programming perl smoother or using the perl interpreter inside of vim, to programs used to prep data for vim (tags and etc) to screen and shell tricks.

I've got a github repository started for configuration files. Send me some git changes to my files or copies of your own before the meeting. Afterward I hope to have the repo updated with all the contributed configuration files, as well as a meta file of the ones people liked.

I made a previous post on perl + vim notes. There are few things there listed under "stuff I'd like other people to show me," if you use/know them please consider coming prepared to talk about them.

Monday, November 23, 2009

Acer 1810T unboxing

My new netbook, Acer 1810T, just arrived from Amazon. I unboxed it this morning.

This machine is lovely. It's everything I had hoped for when I bought an Acer Aspire One (750) three months ago. The Aspire One had such a nice form factor and it was so quiet. But the Poulsbo video chip was unusably slow under linux, and the single hinged button on the track pad drove me batty. I was able to take it back and wait for the new models, and now they're here!

The 1810T has two real buttons on the track pad. The screen updates quickly (I haven't tried video playback yet) and it just feels snappy. I didn't realize how much heavier this 6-cell battery would be than the 3-cell in my 750.

I am slowly getting it to a fully working state. I've done the win7 30-minute post-install update step, then removed various pieces of factory-installed trialware.

I then popped in my Ubuntu Netbook Remix (UNR) usb-key, only to find that my usb-key was dead. After a couple of hours of downloading, I had a new 1G image ready for my other usb-key. After booting into UNR I repartitioned and installed. I'm now giving it a whirl. Video under UNR seems nice.

My first attempt at suspend worked and got me back to the login prompt during restore, but the box kind of freezes while "Checking..." for long periods of time. I'll look at this again after doing an 'aptitude update && aptitude upgrade' to see if there have been any updates since the ISO. I was hoping to also test hibernate before upgrading. With my previous Aspire One, I was able to get hibernation working but not suspend (although I had suspend working before upgrading the video drivers).

Update: Much improved under linux after two changes, inspired by this thread. I added 'libata.noncq=force' to my grub configuration (under the default linux options) and removed the acer-wmi kernel module by adding it to the autoload blacklist. I also decided to reinstall with plain Ubuntu instead of Ubuntu Netbook Remix. I'll be running Ion anyway, so no need for the UNR interface (especially not with all these glorious pixels!!).

Update 2: This How to: Karmic Koala on Acer 1410 / 1810T/ 1810TZ also looks interesting. It just popped up.

For 600 bucks, it isn't exactly in the impulse-buy category, but it sure is lovely. If you can find one!

Saturday, November 21, 2009

Hilbert Space notes

DDJ journal article on space filling curves
example code aa799.txt from the zip file.
Google's Uzaygezen project
Geo::Hash perl module, which has a nice clean implementation
Geo::Hash uses Z encoding.
The DDJ article does a nice job comparing the Z-curve and Hilbert curve. They're both nice ways to convert 2-space to 1-space, which is easier to search via b-tree.
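The Z encoding is just bit interleaving; here's a quick illustration of the idea in shell arithmetic (my own sketch, not Geo::Hash's implementation):

```shell
# Interleave the bits of x and y (Morton / Z-order encoding).
x=3; y=5; z=0            # x = 011b, y = 101b
for i in 0 1 2; do
  z=$(( z | ((x >> i) & 1) << (2*i) ))       # x supplies the even bits
  z=$(( z | ((y >> i) & 1) << (2*i + 1) ))   # y supplies the odd bits
done
echo "z=$z"              # interleaved bits: 100111b
```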

A downside of the standard Hilbert curve is that the x and y space must be the same size. The standard procedure is to inflate the smaller dimension and then produce a curve for the full space. This is wasteful of space and processing time. Compact Hilbert Indices are an idea to make a modified form that doesn't require equal dimensions. That paper was published in 2007, and other than Uzaygezen doesn't seem to have been cited/used yet. So let's get started!

Tuesday, November 17, 2009

Recent Tech Event Reviews

Last Tuesday was the monthly Hadoop meetup[1], nearby at Mahalo.
This was a beginner-friendly event, walking through getting started with hadoop on EC2. There were 20+ people, and the speaker did a great job of sharing his knowledge, though he or the facilitator could have done a better job cutting off rambling or unrelated questions. Jonathan and I both attended.
Nov 14-15, barcampla[2]
Postponed. BarcampLA is going back to once-a-year from twice-a-year.
Wed 11/11, Thousand Oaks Perl Mongers[3], at ValueClick up in Westlake.
Discussion format on perl for systems administration. A wide ranging topic that was more on the basic comparisons of compiled vs interpreted languages, why ruby and python are gaining in visibility in the SysAd world and related items.
Wed 11/04, Los Angeles Perl Mongers[4], here at the Rubicon Project.
Delayed by a week due to illness. Tommy Stanton did a great presentation on Test Driven Development, using Test::Class, Test::More and Test::Most. After his presentation we had a round table discussion and some eXtreme Pair Programming as 10 of us "helped" him write code on the big screen.
Mon 10/27 was SpeedGeeks LA[5], hosted just down Olympic at ShopZilla.
I attended the keynote before work and caught the speed round during lunch. Mike D arrived after his flight and caught everything from 11am through the speed round. Steve Souders (of O'Reilly's Velocity conference) co-hosted and did a great job setting up the keynote. The premise was that, more and more with modern sites, the optimization needs to come through the UI layer, as the back end has already been squeezed pretty tight. His blog should eventually have slides; here's his recap [6].
Thu 10/15 Mindshare Masquerade. [7]
Did not attend, release issues.
Tue 10/13 Hadoop meetup. [1]
Hadoop Desktop overview from henry@Cloudera. A nice talk that got more technical than was probably appropriate (but in a fun way). It rained that night, delaying the presenter's flight by 4 hours and the presentation by 70 minutes, so there was a thin crowd. Jonathan and I both attended; I don't know that we got much out of this one, but it sets the stage for when we come back to ask questions.



Upcoming Tech events

This week:

  • This weekend, Fri-Sun, is Startup Weekend [1]. At Blankspaces, 5405 Wilshire.
  • Tonight is an intro event, What to expect from Startup Weekend [2]. (ticket sales just ended.) 7-9pm. Update: Turned into a webinar, from 7-8pm.
  • Wednesday is [3], which is not strictly a tech event, but has a strong crossover.

Next week:

  • Thanksgiving. I'm out of town, so I don't know what's going on locally.

Week After:

December already!
  • Wed, Dec 2, Perl Mongers[4] here at the Rubicon Project. Our website isn't up-to-date, but this meeting will feature VIM+Perl tips and tricks, as a presentation and open discussion and config swap.
  • Tue, Dec 8, Hadoop Meetup[5]: topic not yet announced.



Wednesday, November 11, 2009

perl + vim (notes)

On the email list, we had a topic this morning on ideas for tonight's meeting, so I told them about what we'll be doing for the Dec 2 topic -- an interactive night of Vim + perl tips : improving the editor experience, improving the coding experience.

My reply got pretty long, so I'm going to pull the notes out here.

The idea: an interactive night of VIM+perl tips. Bring your vimrc's and links to your favorite modules and plugins. Perl stuff that makes VIM better, VIM stuff that makes editing perl better. A lot of my stuff is circa VI, but it is still sweet.

Tips I'll bring:

  • setting up vim to use the quickfix / make functions on your perl. :make
  • integrating perltidy
  • my perltidyrc
  • ctags integration
  • keyword completion
  • a couple of vim plugins
  • examples of vimdiff and svn+vimdiff integration
  • splitting windows
  • related: zsh (the one true shell(tm)) integration.
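A taste of the first bullet, as an illustration (my usual recipe; the errorformat only needs to roughly match perl's "... at FILE line N" messages):

```
" let :make run the perl compile check and fill the quickfix list
autocmd FileType perl setlocal makeprg=perl\ -c\ %\ $*
autocmd FileType perl setlocal errorformat=%m\ at\ %f\ line\ %l%.%#
" then :make to check the buffer, :copen to browse, :cn / :cp to step through
```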

Stuff I'd love to hear about:

  • using embedded perl in vim. perldo and friends.
  • omnicompletion
  • Your Favorite file browser
  • Your Favorite vim plugins
  • Info about the Vim:: modules on CPAN.
  • updated VIM perl syntax highlighting description, now maintained by petdance/Andy Lester (announce, git, issue-tracker, group)
  • using ack, and using it from inside vim (Tommy, examples please)
  • perlcritic integration via perlchecker.vim
  • iskeyword settings ( set iskeyword+=: , etc)
  • perl-support.vim -- seriously, does anyone use this? I'm not sure I see how it would be helpful. If you do, I'd love a visit from the clue-train.

I only got as far as searching VIM:: on cpan. Next up, mining the perlmonks (and updating, of course) -- the vim info there is old, and some of it is wrong (the meanings of a couple of flags flipped between vim6 and vim7).

Saturday, November 7, 2009

XBOX! Welcome to 2002!

I've finished an item from my list of projects. Last night I converted my xbox via softmod and added XBMC. Welcome to 2002!

I mostly followed this guide, using a USB key and an xbox-to-usb adapter that I bought from Suntek. Their site looked sketchy and their search interface is terrible, but the product totally arrived when expected, worked great and was a great deal -- $4.82 with free shipping from Hong Kong. Scott had included both Mech Assault and the Tom Clancy game, two of the three exploitable games for softmodding. One thing left out in that guide -- hook the USB key up to the xbox first for formatting.

I already have my readyNAS NV in the closet serving up my mp3s -- it's taken a long time to rip all the CDs we own, but they're pretty much all on there. Now I'll need to rip my DVDs too -- way easier than walking over and putting a disc in the player.

Thanks for the great birthday present Scottie! It is as awesome as I thought it would be. Glad I already had the NAS setup, makes this part a breeze.

What dreams may come ... must give us PAUSE

People's CPAN ids came up again at the Mongers on Wednesday. We had several in the crowd: gray, Pip, and Beppu. This was Pip's first visit since we've started back up.

I'd like to get a PAUSE ID, so I too can post to CPAN. But the first question is always "what modules are you going to post?" and if I knew that... I'd have them done by now. Maybe the module I'll write for the January mongers will need a public home.

And with that, I should really be getting off to sleep, perchance to dream.

To sleep: perchance to dream: ay, there's the rub;
For in that sleep of death what dreams may come
When we have shuffled off this mortal coil,
Must give us pause: there's the respect
That makes calamity of so long life;
For who would bear the whips and scorns of time,
The oppressor's wrong, the proud man's contumely,
The pangs of despised love, the law's delay,
The insolence of office and the spurns
That patient merit of the unworthy takes,
When he himself might his quietus make
With a bare bodkin? who would fardels bear,
To grunt and sweat under a weary life,
But that the dread of something after death,
The undiscover'd country from whose bourn
No traveller returns, puzzles the will
And makes us rather bear those ills we have
Than fly to others that we know not of?
-- Hamlet by Billy Shakespeare.

Update: it says "what are you going to contribute", that's a whole other kettle of fish. This looks like a test I could pass.

Tuesday, October 27, 2009

LA Perl Mongers: Pushed back a week

Our next L.A. Perl Mongers meeting has been pushed back a week.
Instead of 10/28 we'll now meet on Wednesday, November 4th.


  1. Tommy Stanton: Testing for the Win! Test::More, Test::Most and Test::Unit.
  2. Andrew Grangaard: either "Care and Feeding of third party modules, revisited -- a local::lib example" or "Hadoop with Perl"

Wednesday, October 21, 2009

Rakudo October Release -- aka Thousand Oaks

Congrats Aran, Shawn and Todd!

Exciting news. Jon has decided to name the October Rakudo release after the TO group, due to the excitement and buzz from our perl6 hackathon.

Rakudo Perl follows a monthly release cycle, with each release code named after a Perl Mongers group. The October 2009 release is code named "Thousand Oaks" for their amazing Perl 6 hackathon, their report, and just because I like the name :-)

Thanks for inviting me to the perl6 hackathon, that was loads of fun. Crazy to remember a time when I could spend 8 hours on a Saturday doing fun hacking and not working... Looking forward to doing it again, maybe hosting down here in Santa Monica (but it's hard to hack when the beach is right there, just calling). Maybe this time I'll even write some perl6. (though pir was fun too).

Subject: Rakudo October Release
From: Jonathan Scott Duff <perlpilot@<elided>.com>
Date: Tue, 20 Oct 2009 21:42:54 -0500
To: Andrew
Hello there, I'm handling the Rakudo release for October in a couple of days and I wanted to let /someone/ from know that I've chosen Thousand Oaks as the code name for this release for two reasons:

1) You guys held a Perl 6 hackathon TO++
2) I just like the name "Thousand Oaks" :-)

Your blog was linked in the release document, so you're it for me contacting

I realize I probably could have just emailed TO's mailing list, but for some reason that didn't sit well with me. It felt as if it would be too abrupt. In any case, feel free to share the news with the rest of (or just wait for the release announcement if you want :-)

Anyway, cheers,

Jonathan Scott Duff

Monday, October 19, 2009

Cowboy culture.


What is a cowboy, in this context? He's writing scripts, not programs, not software. He's someone who runs off on his own, thinking he understands what he is working on when he is really cargo-culting and making guesses and assumptions. He loves making changes live in production and uses phrases like "I think it should work, right after this patch." He patches things up in a belt-and-suspenders way that obscures the original logic or business case. He has yet to see the light of testing (at all, let alone automated unit testing with high coverage), and checks things into source control after-the-fact, where what's committed may or may not match what is in production (which may well vary from machine to machine). And: "hey, this 6000 line function should basically work, most of the time, but when it doesn't I have errors sent to /dev/null in the crontab."

Let's say you're a cowboy or you work with one. How do you survive? Adapt? Reform?

Blast off and nuke it from orbit, it's the only way to be sure.
If only we could just nuke it. But that's not going to happen until you have a replacement. The fact that the replacement is cleaner and more understandable won't be enough unless it is also faster and provably more correct, and quite likely it'll need to be bug-for-bug compatible with the old code.

What's the problem with just rewriting it all from scratch? First, code archeology time. When dealing with legacy code, that is spaghetti and undocumented and untested, you will spend most of your time figuring out what it does and why. Is it important that this test short circuits? Why is most of the logic up here, but some is down there? Does the sort order of this array of keynames matter? Are there implied dependencies?

Now, the only reason you're in there is that something is broken, and someone has allocated some time and resources to fix it, firefighting style. Otherwise, no one wants to go near that code, especially given that the code owner seems to be putting in herculean efforts to keep it running (staying up late most nights handling error escalations that ring his pager at 4am). Of course, it's also probably terribly important code to the business (it handles logs or stats or something similar in the direct money path), so it HAS TO BE DONE! OMG OMG THE SKY IS FALLING FIX IT RIGHT NOW!

Resist the urge to just jump in and make that one change. Yes, someone will be yelling "why isn't this ready?" and you'll have to be firm in your reply that, in a chaotic system, you can't know that a simple change won't affect the whole system negatively. Blind refactoring may just further hide the business logic and cause you to rewrite and obscure current bugs. You have to assume there are other problems than the one you are fixing; just no one has noticed the others (or noticed and failed to report them). You don't want to be the one called at 4am because your simple fix took payroll offline when the 3am job kicked in and was expecting some old, broken format.

I recommend a two-fold approach. First, start with a light bottom-up refactor. Trim the lexical variables down to their minimum needed scope, and change the seven $t variables into useful names that match their scope (big scope == big, more descriptive name). Pull blocks of that behemoth function out into smaller functions. Find a test case that exercises the important features, and make sure the original code runs on it reproducibly, giving the same output each time. Really, start with this test case: I've been burned so many times chasing my tail because my test output didn't match the original, due to some random or non-causal output from the original code. Then find any external dependencies (the database, time-of-day, phase-of-moon) and start thinking about how you'll test them.

Once you have that done, start working top-down. You can't do this step until you've been a bit steeped in the code, as you won't know the right questions to ask the business sponsors. You have to know how it does things without letting that set your mind-view of how things will be done -- a fine line to tread. Now look at the problem and the problem domain and ask whether this approach is still valid, given the way the data and data model have changed. Can you bring some patterns into the code, separating out pre-processing, report definition, and number crunching? What can you learn from the evolution of the old code, over multiple passes of tweaks and updates, about what the business has learned? There is value in that knowledge, if you can separate the wheat from the chaff, the important changes from the incidental, accidental, and cargo-cult-copy-and-pasted changes. Build your modules from the ground up to be reusable and modular, yet designed for the business case the current script handles. Test as you go; you'll be so much happier.

Now you have your middle rewrite written. It has some of your top-down and all of your bottom-up changes. Test it against the old script on some minimal input, comparing their outputs. You're going to be running this test a lot, so please write a script to automate it. You'll thank yourself later, which is nicer than cursing yourself out because that half-hour test went wonky when you broke your shell command history. Now you'll be adding bug-for-bug compatibility, to make sure your code produces output that matches current production. Add those bugs, really. Then document them in your bug tracker and make sure they really are bugs, and not "oh my goodness, of course I need the fact that the reports come out sorted by the third character of the report name" expected functionality for someone.
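You'll be running this comparison constantly, so here's a minimal sketch of what such an automation script might look like (the function name and the MATCH/MISMATCH labels are mine, purely illustrative):

```shell
# compare_outputs OLD_CMD NEW_CMD INPUT_FILE
# Run the old and new implementations against the same input file and
# report whether their outputs match; the diff itself goes to stderr.
compare_outputs() {
    old_out=$(mktemp) new_out=$(mktemp)
    sh -c "$1" < "$3" > "$old_out"
    sh -c "$2" < "$3" > "$new_out"
    if diff -u "$old_out" "$new_out" >&2; then
        result=MATCH
    else
        result=MISMATCH
    fi
    rm -f "$old_out" "$new_out"
    echo "$result"
}
```

Wire it up to your real old script, new script, and a saved minimal input, and rerunning the regression becomes one command instead of shell-history archaeology.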

When they match, make the switch. There should now be no visible difference from the outside, but you can go to town on your in-between scaffolding code. That's why you put in all those unit tests: you can hack up chunks of internals and know you aren't affecting the eventual output. Soon you'll have a business-critical chunk of software that won't call you for help at 4am, a program you're proud to have rescued from its prior life as a "script written by a cowboy."

Update: Todd sent me links to two of his cartoons from asciiville, from when he was dealing with a "slew of these cowpies".

BTW: I loved the Cowboy Culture post. I had to fix a slew of those cowpies about a year ago, and I drew a couple of toons as a release. I thought you might enjoy these as we are on the same wavelength with respect to cowboy coding.. :)

bedtime stories

the new recruit

Monday, October 12, 2009

Congrats on the new book, Wei-Hwa!

Wei-Hwa Huang has a new book out, Mutant Sudoku. As one of the world's top puzzlers, he's sure to bring an interesting take on the game. Congratulations on your new book!

I remember when he first introduced me to Sudoku via a terrible pun involving a certain Count from Star Wars, about 2 years before the US sudoku craze began. I really didn't appreciate the quality of the joke until much later, and now I can't seem to find that quote in the gale logs. I did find a mention of the sudoku t-shirt he made in silk screening at Caltech, circa 1996. So clearly he's been aware of these puzzles for a long time.

My sophomore year I took the Putnam Exam, and was happy to emerge from the test alive; I think I even got a few points of partial credit. Wei-Hwa, as a frosh, did amazingly well. Now that I look it up, I see he was a Putnam Fellow in 1993. Top 5 in the country. Wow.

Seriously, how many people do I know with their own Wikipedia entry?

Sunday, October 11, 2009

LA perl mongers October update and September recap

September's perl mongers meeting was awesome. We had two presentations (both from me!). The meeting for October will be Wednesday, October 28th. The first presentation was an example of getting work done using perl -- specifically, using JIRA::Client (a thin wrapper around SOAP::Lite) to access a JIRA bug tracking installation and pull bug counts. Slides included fully working example code. The author of JIRA::Client commented on the blog post. That is so exciting! That's what community and social coding feel like.
Nice example. It inspired me to make it easier to get from filter names to filter ids. I just released JIRA::Client version 0.16 which implicitly casts filter names into filter ids in the calls to getIssueCountForFilter and getIssuesFromFilterWithLimit.
-- Gnustavo
The second presentation was a discussion of "care and feeding of third party perl modules." We started with my blog post and went around discussing what approaches people had tried, which ones they liked, and which ones they found lacking. Tommy was kind enough to run the video camera for part of the discussion, so once I transfer the tape, we'll have something to upload.

Some of the main points to come out of the discussion were: the importance of staying up-to-date; of having unit tests for the features you expect and use from external modules; and of green-field testing (make sure you can build it all from scratch in your test environment). The need for a company to institute some sort of revisioning on top of CPAN came up a few times.

Some novel ideas included: source control hooks that check for external modules used in a given commit and update all those modules to the current release, requiring the programmer to verify that his/her check-in works with current modules; a single repository for third party modules and other code that can be easily pulled into any internal project or repository (local::lib helps here); and considering what is pushing you to be "up-to-date", as maybe you don't actually need it (sacrilege!).

Some related tasks that need to happen soon: upload the video and notes from the second presentation before the October meeting. Update the website with the October date (Edit: done). Put up the slides for the JIRA::Client talk. Find a speaker for October (Tommy signed up, but then had to defer to November). Find a November date to work around Thanksgiving. Decide if we need to cut back to a single speaker (or two speakers every two months).

Tuesday, October 6, 2009

risks and mistakes

People who don't take risks generally make about two big mistakes a year. People who take risks generally make about two big mistakes a year.
-- Peter Drucker

This was our quote of the day yesterday at work. Serendipity that Mallory would pick one of my favorite quotes on the one year anniversary of my coming to work for the Rubicon Project.

I think I've made my share of mistakes over the past year -- mostly from not taking big enough chances and not adapting and changing quickly enough. Something to think on during the coming year.

Sunday, October 4, 2009

vimdiff ... where have you been all my life?

I'm finally giving vimdiff a try. I normally get by with diff, diff -u (unified) and diff -u -w (unified, ignore whitespace) and their svn equivalents svn diff (unified) and svn diff -x -w (to ignore whitespace).

I know vimdiff exists, and have used it trivially once or twice. But I never felt a need to dive in.

Right now, I'm looking at a file of a coworker's modified code, trying to figure out which version it originally corresponded to. I've looked at the diffs, and I think I know which version it is, but I was having trouble comparing the lines to see what some of the diffs mean.

I popped it up in vimdiff (vimdiff file1 file2) and now I have a lovely side-by-side view of the two files, with coupled scrolling. Chunks of unmodified code are folded and out of the way. The vim folding commands work normally: zo will open a given block if I need to see that code and zc will close that block back up; zR opens all the folds and zM closes them all again.

The normal vim window/frame commands can be used to switch between the two frames. Since scrolling in either file scrolls both files, there isn't a big need to switch between the frames, except when examining long lines. By default, line-wrap is off for the diff, so long lines appear truncated until the viewport is scrolled. Control-w w will switch between the two frames. Jump to the end of the long line with $. Again, both frames scroll together left/right just as with up/down. Alternatively, :set wrap in command mode will enable word wrapping; this needs to be done in each frame independently. If literal tabs are making your lines too long, try displaying tabs as something smaller: four characters (:set ts=4) or two (:set ts=2). Again, this must be applied to each buffer independently.

I really like the intra-line difference highlighter. The whole line is highlighted, but the changed portions are highlighted in a different color -- purple and red respectively in my setup. That helps me pinpoint the exact character changes, so I can focus on seeing the "why" of the change instead of digging for the "what".

vimdiff and svn is not an either/or proposition. svn allows an external diff command, via the --diff-cmd switch. Unfortunately, vimdiff doesn't work out-of-the-box with this flag, as svn adds extra information when passing to the diff program. I have a very short wrapper called vimdiff-svn-wrapper that I use to drop the extra arguments. I have this in my path and use svn diff --diff-cmd vimdiff-svn-wrapper filename to run svn diff on filename, displaying the output in vimdiff.
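For the curious, the wrapper only needs to discard svn's extra arguments: svn invokes the diff command as "CMD -u -L label1 -L label2 oldfile newfile", so keeping the last two arguments is enough. Here's a sketch of that idea, written as a shell function; it is a reconstruction from the description above, not the author's exact script:

```shell
# vimdiff-svn-wrapper sketch: drop the flags and labels svn prepends,
# keeping only the final two arguments (the two file paths to compare).
vimdiff_svn_wrapper() {
    shift $(( $# - 2 ))   # discard everything but the last two arguments
    vimdiff "$1" "$2"     # a standalone script would: exec vimdiff "$1" "$2"
}
```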

On the other end of the spectrum is svnvimdiff, which runs vimdiff on the output of svn diff. It's messy in the way it uses temp files, and when I just tried the version I downloaded last year, it didn't work for me. I've just written a new version. Had I checked the link, I'd have seen the original is on version 1.5 and I have version 1.2. My version uses vimdiff-svn-wrapper with svn diff --diff-cmd. I have directly copied his clever method of getting the modified svn files by parsing the output of svn stat.
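That svn stat trick boils down to something like the following sketch (my approximation, not his actual code): filter svn stat for modified files, then push each one through the wrapper-equipped svn diff.

```shell
# svnvimdiff-style loop: for every locally modified file reported by
# `svn stat`, run svn diff through the vimdiff wrapper, one at a time.
svnvimdiff() {
    svn stat | awk '$1 == "M" { print $2 }' | while read -r file; do
        svn diff --diff-cmd vimdiff-svn-wrapper "$file"
    done
}
```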

Time to get back to figuring out the changes in his code...

Dear Moose

Dear Moose,

Just a quick note to say, "Thanks for being awesome!"

Hanging out with you both this weekend was awesome. I love the little test-driven proof-of-concept program I wrote with you guys. It was a blast!

I wish I could share the code with my other friends, but you know it is Antony's project and he's a bit paranoid that his idea will escape into the wild. Normally, I think that's a silly attitude, but in this case I understand. His project is definitely not something I'd thought of or considered previously and after a first stab of research it still seems novel. More importantly, after he described it I couldn't stop myself from blurting out: "I WANT ONE!" Which is a good initial reaction for a consumer product.

I had ideas for how to build a simulator for the concept bouncing around my head all week. I finally got both time and energy together on Saturday night. Moose, making objects with you let me focus on the features rather than the boilerplate. Ten "has" lines later and I had a loadable module. These should be Ints, these should be Bools, this one can't be changed, I told you, and *poof* my module had input verification. Very slick. Then I got started writing tests for new features, so I could write the features.

Write test, fail test, write feature, pass test.
TDD, it seemed so strange when you first said it, but I'm starting to get it now. It's getting to be a real rush to test the code and see it pass. I'm glad our buddy VIM was there to help, :make for the win.

The design is simple enough that refactoring becomes more obvious. With the minimal boilerplate overhead, it was easy to pull my messages out into first class objects. Maybe that will eventually become a role?

Here's a look-alike of the test I wrote to verify that I could build the objects and get a message object to pass between them.

Looking forward to hanging out with you again soon. Take care!


Thursday, October 1, 2009


I used local::lib this week. Wow, it really is a nice way to install cpan modules into my source control tree, to build an app-specific perl5 lib.
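For reference, the bootstrap is short: local::lib prints shell code pointing PERL5LIB and the install targets at a directory of your choosing, which you eval into your environment. The path below is just an illustrative example:

```
# point CPAN installs at a per-app directory (illustrative path)
eval "$(perl -Mlocal::lib="$HOME/src/myapp/perl5")"
cpan Moose   # now installs under that directory, not the system perl
```

Put the eval line in the app's environment setup and every checkout gets the same library path.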

Tuesday, September 29, 2009

Book Review: Ace on the River

I just finished (Aug 28th) listening to Ace on the River, by Barry Greenstein. Read by the author.

This is a very interesting look into the life of a professional poker player. It is more a memoir than an "Advanced Poker Guide". It does include a lot of information on playing the larger game -- building a life while playing poker.

Things I liked:

  • Talking about the difference between tourney and cash game styles. Especially about how the cash game people look down on the tourney players, yet are jealous of the "respectability" afforded the tourney players.
  • The last section walks you through several real tourney hands: how he played them, and what the optimal plays were. On the audio version, there are pauses to force you to think about the situation and come up with your own answer before he tells you his thoughts.
  • While tourney payouts are top-heavy, so it's best to win, it is important to guard your chips more than in a cash game, since there is no rebuy. This doesn't always mean cautious, tight play. Really it means evaluating where your stack is relative to the big blind -- and in a tourney the blinds are always moving up, so they are a big chunk of the stacks. It may well be that you take a big chance when you're below 8 big blinds, because you've got to take it to stay alive.
  • Winning early hands gives chip power. Use your big stack "as a bludgeon" on your opponents
  • Conversely, realize the power of the short stack (more useful in a cash game): it's easier to get odds to double you up. In a tourney, it's still nicer to have the big stack!
  • The emphasis on maintaining a life outside of the game.
With a foreword by Doyle, how could you not be interested?

Sunday, September 27, 2009

Business Motivation

An important message and a beautiful example of design.
Enlarge the image to see the full message.

Intrinsic > Extrinsic

Found this on Dan Pink's blog.

Wednesday, September 23, 2009

LA PM tonight.

I'm looking forward to seeing you tonight at 7.
1925 S. Bundy, LA, CA, 90025.

I'll be your speaker tonight, as our originally planned speaker has an injury and can't make it. I've put together a presentation and a Q&A discussion.

* Accessing the JIRA SOAP api using JIRA::Client.
JIRA::Client is a thin wrapper around SOAP::Lite that provides a few additional convenience methods. I'll compare and contrast a perl and ruby implementation of a simple script to query bug counts.

* Care and feeding of third party Perl modules at work -- how do you do it?
I've asked a few of you this question, and heard a variety of answers. I think it'll be interesting to discuss the various strategies that are out there. How is this not a solved problem that everyone knows? (Or am I just out of the loop?)

I have blog posts on both of these, if you'd care to comment before the meeting: access-jira-api-from-perl-with

Info, directions and parking map:

Tuesday, September 22, 2009

cryptic git status: "failed to push some refs"

A first stumble with git: error: failed to push some refs to ''.

And here's how I fixed it.

[agrangaard@willamette]% git fetch                               129 ~/examples
[agrangaard@willamette]% git push                                  0 ~/examples
 ! [rejected]        master -> master (non-fast forward)
error: failed to push some refs to ''
In my case, this meant I had a merge conflict. git status showed no changes, and git fetch wasn't listing any actions or changes.

Once I realized I needed to do a git pull, more specifically a git pull origin master, the merged code was pulled in.

[agrangaard@willamette]% git pull origin master                    0 ~/examples
 * branch            master     -> FETCH_HEAD
Auto-merged jira.conf
CONFLICT (content): Merge conflict in jira.conf
Automatic merge failed; fix conflicts and then commit the result.
[agrangaard@willamette]% git status                                1 ~/examples
jira.conf: needs merge
# On branch master
# Changed but not updated:
#   (use "git add ..." to update what will be committed)
#       unmerged:   jira.conf
no changes added to commit (use "git add" and/or "git commit -a")
[agrangaard@willamette]% vi jira.conf                              1 ~/examples
Since there was a manual merge needed, I opened jira.conf and cleared out the diff. Then I committed it locally and in the master repository:
 [agrangaard@willamette]% git add jira.conf                         1 ~/examples
[agrangaard@willamette]% git status                                0 ~/examples
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#       modified:   jira.conf
[agrangaard@willamette]% git commit                                0 ~/examples
Created commit e69f257: Update jira.conf
[agrangaard@willamette]% git status                                0 ~/examples
# On branch master
nothing to commit (working directory clean)
[agrangaard@willamette]% git push                                  1 ~/examples
Counting objects: 13, done.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 1.71 KiB, done.
And now my local check-out is in sync with my remote repository at github. Woo!

Care and feeding of third party perl modules at work -- how do you do it?

How do you install and maintain third party perl module dependencies in your work perl code base? How about your own perl modules?

I'll be hosting an open discussion on this at the Los Angeles Perl Mongers meeting tomorrow (Wed 2009-09-23). Let's see what we can dig up.

I've been at a few perl companies now, and they all seem to have taken different approaches. Yes, TIMTOWTDI, but TIMTOWTDIWrong aka TIMTOWTFIU. This seems like it should already be a "solved problem," so what are the best practices in this arena?

Popular memes in this space:

  • System Perl
    • Install to system perl via cpan. Upgrade whenever.
    • Install to system perl via package manager, deb or rpm. Manually roll unpackaged cpan modules through a tool like cpanflute.
    • Sub part 1: install those rpms/dpkgs manually
    • Sub part 2: run a local yum/apt-get/dag repository internally to pull "trusted" internally packaged cpan modules
    • Install via system tools (puppet, etc) live on each box from a configuration file of needed modules.
  • (shared) Application Perl, compile your own single perl version for internal applications
    • Install into the application perl via cpan
    • check into local source control source for third party modules, install during application install.
    • check into local source control binary version of third party modules.
  • Install into a local library, using local::lib with either the application or system perl.
  • Install from a "mini cpan", maintained internally and populated with vetted, approved third party modules.
How do you do it?

If you're just installing a handful of modules, the ad-hoc approach may be sufficient. Right now I'm writing some new apps with Moose, and installing that module dependency chain seems crazy without figuring out a plan. With Moose and the MooseX namespace there are probably 20+ modules I want to install, and many of them are improving and releasing rapidly. Edit: Sorry Dave, you're right, not hundreds of modules. I didn't mean to spread Moose FUD, I love the Moose

Plan: flesh out this page with links to the current wisdom on these matters, from fellow iron mongers, perl monks, etc. I know I've read half a dozen recent articles on this; time to find and organize them.

Access the JIRA API from Perl with JIRA::Client

Quick problem: I want to know the bug count for the upcoming release, which encompasses two jira projects. I'd like to know the count of open, closed and resolved bugs for each of the two projects and combined. I have created several stored filters that show me the bugs/issues/tickets in each category, and a count.

Because the issues are in multiple projects, combining them in one search and filtering by closed status is daunting directly in Jira.

A quick CPAN search later and I had JIRA::Client installed on my workstation. After jumping through a few hoops I got the JIRA admin to enable SOAP API access; the API is installed but disabled by default.

The hardest part was finding the correct API docs for our version of JIRA. I didn't think 3.11 to 3.13 would be a big difference, but it was.

Tonight I threw together a quick script to grab the bug counts across each of my 6 saved searches and return the bug count. It started out looking like this. Eventually I prettied it all up with flags, config files and documentation and pushed it to my github repository named "examples":

Is there a way to embed a file from my github the same way as I can for gists from

Pair Programming with gnu screen

Good ole' screen, nothing beats screen
-- bart simpson (almost)

Setting up a screen session for sharing

  1. screen needs to be setuid root for multiuser sessions to work
  2. set the multiuser flag to on
  3. use acladd or aclchg to add acl rights for other users. This must be done after enabling multiuser.

make screen setuid:
ls -l $(which screen)
sudo chmod +s $(which screen)

Note: in zsh this could be simplified by using = to replace the $(which ...) block.
ls -l =screen;
sudo chmod +s =screen

Update: On recent ubuntu installations (9.04 and newer), /usr/bin/screen is a wrapper around /usr/bin/screen.real. /usr/bin/screen.real is the file that needs to be made setuid.

sudo chmod +s /usr/bin/screen.real

Note: I'll use "^A" to represent the screen key.

^A:multiuser on
^A:acladd usertwo
Now usertwo can see the screen session of userone with the following command (the trailing slash is important!):
  screen -ls userone/

And connect with either of:

  • screen -x userone/
  • screen -x userone/name-of-session
In the latter case, name-of-session is used as a search string within the available named screen sessions.

If you used acladd, then usertwo comes in with full capabilities -- write and execute. It is possible to restrict users to the write bit (allows sending characters to the screen, assuming no one else has a write-lock) or the execute bit (allows screen commands to be run). These capabilities can be granted on a per-screen and per-user basis.

Check out more on the screen man page.

Now, you and your remote pair can get back to work. One of you coding, one of you watching. Both of you creating.


  • Try to get this set up and running while userone and usertwo are both local, so it is easier to debug.
  • you'll need to own your own pty. This is not normally a problem, unless you are using su or sudo to change users, which you may want to do while testing.
  • don't forget to set multiuser to on *before* trying to set the acl commands, I think that was important.
  • Use an out-of-band method for communicating. I like a VOIP connection or even a ytalk or irc back channel. ( I'm still waiting for a jabber based tool that will do all of this for me, all while integrating into vim. )
  • give your screen session a useful name with the -S flag, like screen -S shared or screen -S bug_1234
  • I have vim on one screen, open with my perl module and my .t test (vim -o t/foo.t), and a second screen primed for running the test file. If test output is short, I just run it in the vim session with :make , but for long, slow, verbose tests, I run them in a second window. This lets my screen partner run tests at his/her leisure while I'm typing code.
  • set your shell to auto-update the screen title for a given screen when a command is run.
  • use double-quote ^A" to get a named list of screen windows, choose the window to switch to by number, j/k or arrows. These two make it easier to follow what is going on.
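Pulling the pieces above into one place, a shared session start-to-finish might look like this (userone, usertwo, and the session name are illustrative):

```
# once per machine: screen must be setuid root for cross-user attach
sudo chmod +s /usr/bin/screen        # /usr/bin/screen.real on Ubuntu 9.04+

# userone starts a named session...
screen -S shared
# ...then, inside it (order matters):
#   ^A:multiuser on
#   ^A:acladd usertwo

# usertwo, from another terminal:
screen -ls userone/                  # trailing slash required
screen -x userone/shared             # attach to the shared session
```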

Tuesday, September 15, 2009

Welcome Aboard: Others Online

The Rubicon Project announced the acquisition of Others Online today. Welcome to the team, guys! It's nice to finally be able to talk about my new coworkers.

Official announcement

With today’s acquisition of Others Online, Rubicon Project aims to provide its publisher customers a way to find the best data on audience segments from among all those data providers. Others Online, which apparently started out several years ago as a social network play, offers several audience data-related services—in particular an “affinity scoring” service that determines how strongly a person is interested in particular brands, products or topics.
-- Businessweek

Monday, September 14, 2009

Motivation -- Autonomy, Mastery and Purpose

"Autonomy, Mastery and Purpose. It's not Utopian, I have proof."
--Dan Pink.

This TED talk from Dan Pink covers the economics of motivation. It's an awesome presentation with good information and a compelling speaker. The thesis: knowledge work doesn't improve with higher incentive rewards; in fact, it may be hindered by them. Why is there a mismatch between what Science KNOWS and what Business DOES?

This has been replicated over and over and over again for nearly forty years. These contingent motivators -- "if you do this, then you get that" -- work in some circumstances, but for a lot of tasks they don't work, or often they do harm. This is one of the most robust findings in Social Science, and also one of the most ignored.

The talk dovetails nicely with the book I'm reading, Predictably Irrational by D. Ariely, which I would suggest for further reading, as well as Management Rewired, which is next on my reading list.

Two quotes from Ariely used in this talk:

"As long as the task involved only *mechanical* skill, bonuses worked as they would be expected: the higher the pay, the better the performance."

"But once the task called for "even *rudimentary cognitive skill*," a larger reward "led to *poorer performance*."

What would Peter Drucker say? I think he'd approve. Drucker "maintains that people motivate themselves. You can't motivate them; you can only thwart their motivation. To be an effective leader you must recognize that the business you're in is the obstacle identification and removal business." (source)

Sunday, September 13, 2009

September LA Perl Mongers Meeting

See you all in a week-and-a-half! Please let me know if you are interested in presenting!

What: Los Angeles Perl Mongers Meeting
When: 7-9pm
Date: Wednesday, September 23, 2009
Where: The Rubicon Project HQ - 1925 S. Bundy, 90025
Theme: Perl!
Food: Pizza and beverages provided.
RSVP: Responses always appreciated.

Our website has been updated with the current info as well as titles of past talks.

I haven’t recovered enough from the previous meeting to write up a recap, but it was definitely off-the-hook. We ranged from co-routines to Moose to Jaeger shots, with the last guests stumbling off at 1:30am after the poker game.


Friday, September 11, 2009


Mallory Portillo wrote:

In honor of the fallen (friends of the Rubicon family):
Todd Beamer
Scott Weingard
John Murray

It is often said that out of great hardship comes strength. Instead of letting the tragic day tear us apart as a country, we came together as a nation and mourned together, helped each other, and shared memories of the event and of the fallen. It is our saddest day but likely our proudest moment. We showed, as a united country, that freedom is a strength not a weakness and in the face of great tragedy we stop at nothing to come together and hatred cannot tear us apart.

Eight years later, we will all likely remember the exact moment when we saw the first plane hit the World Trade Center, but more importantly this day will always be a time for us to reflect on what is so great about this nation and its people. Today in the LA office, we are having a pot luck to share food amongst friends and also raise money for a local non-profit. In the spirit of 9/11, please take a moment wherever you are to share with others (even if it’s just a smile on the street or an email to a friend).

To those who knew Todd, Scott and John, my condolences on the tragedy of your loss.

Thank you for sharing this act of Remembrance. None of us are alone in our grief, for we are all united. Let us hope that the legacy of this day will be of a united, strengthened people and a lasting day of Service.

This is my third anniversary back in California and it has felt so strange for this to be a "normal day" for so many people. A day of Remembering puts into stark relief the pettiness of our regular days and reminds me how lucky I am to be loved and living a wonderful life. Mallory, thank you for reminding us why the Rubicon Project is such a special place.

peace and love,

"On a day when others sought to sap our confidence, let us renew our common purpose, let us remember how we came together as one nation, as one people, as Americans united, Such sense of purpose need not be a fleeting moment."
— President Obama, September 11, 2009.

Monday, September 7, 2009

Perl 6 hackathon

Update: Rakudo Release #22 has been named after us!
blog | official announcement

Our Perl 6 Hackathon was last Saturday. I had a blast, waking up early and spending Saturday morning tackling a new challenge. Aran & ValueClick were wonderful hosts! Thank you! In theory, our plan was to make a simple DB-backed aggregation website to test the state of the art in perl6 land. My goal was to show up and hack away, and see what there was to see.

We set ourselves up on a communal account on a shared host. I set up a common screen session, which made it easy and fun to follow each other's work. There may be newer tools for communal editing (subethaedit and the like), but screen has a certain beauty and simplicity that is perfect for this "four laptops in a room" coding style. I also made a local git repository to snapshot our progress. The local repo is complete, but I had problems pushing it to github. I'll get that working soon and push it up.

Before the meet-up, Todd had made gains getting sqlite3 to work with perl6 + Rakudo. (Honestly, I was mostly lost by his birdseed code, but it is definitely neat.) I spent much of Saturday pairing with him, helping get his stuff working in our shared environment. It's frustrating that updates to Rakudo knocked out his working code, but he has since moved on to get it working another way.

As you know, Mitch and I were working on the cyanide system. Well, earlier today it ate itself. But, these little set-backs are just what we need to take a giant step forward. Right, Kent? Needless to say, I was a little despondent about the melt down, but then, in the midst of my preparations for hari kiri, it came to me. It is possible to synthesize excited bromide in an argon matrix. Yes, it's an excimer frozen in its excited state.
--Real Genius
I didn't actually write anything in perl6 during the hackathon. I did get to read and edit some PIR (parrot code). I read the scripts that Aran and Shawn put together. I think Aran's "game" is even still live. Notice the empty button on the cave page; that's the result of the body of a for loop getting executed, even though the array is empty. Bug? Feature? Confusion?

I had a great time. I enjoyed hacking together a working environment and smoothing rough edges, that plays well to my skills. I hope to do something similar in the future, "social coding" is fun! Give it a try. Get involved! Your Language and Your World are what you make them!

Sunday, August 30, 2009

Book Review: Ace on the River

I just (8/29/09) finished listening to Ace on the River, by Barry Greenstein. Read by the author. This is a very interesting look into the life of a professional poker player. It is more a memoir than an "Advanced Poker Guide". It does include a lot of information on playing the larger game -- building a life while playing poker.

Things I liked:

  • Talking about the difference between tourney and cash game styles. Especially about how the cash game people look down on the tourney players, yet are jealous of the "respectability" afforded the tourney players.
  • The last section walks you through several real tourney hands: how he played them and what the optimal plays were. On the audio version, there are pauses to force you to think about the situation and come up with your own answer before he tells you his thoughts.
  • Tourney payouts are top heavy, so it's best to win, but it is more important to guard your chips than in a cash game, since there is no rebuy. This doesn't always mean cautious, tight play. Really it means evaluating where your stack is relative to the big blind -- and in a tourney the blinds are always moving up, so they are a big chunk of the stacks. It may well be that you take a big chance when you're below 8 big blinds, because you've got to take it to stay alive.
  • Winning early hands gives chip power. Use your big stack "as a bludgeon" on your opponents.
  • Conversely, realize the power of the short stack (more useful in a cash game): it's easier to get odds to double you up. In a tourney, it's still nicer to have the big stack!
With a foreword by Doyle, how could you not be interested?

Perl Iron Man Rules!

My Badge! Now that The Iron Man Badges are working, here's a quick recap of the Perl Iron Man competition rules and badges.
To stay in the Iron Man Challenge, you must maintain a rolling frequency of 4 posts every 32 days, with no more than 10 days between posts.
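Those two rules are mechanical enough to sketch in code. Here is a rough Python illustration (my own sketch, not the actual IronMunger judging code): a streak qualifies as long as no gap between posts exceeds 10 days and every run of 4 consecutive posts spans at most 32 days.

```python
from datetime import date

def iron_status_ok(post_dates):
    """Rough check of the Iron Man rules (a sketch, not the real judge):
    no more than 10 days between consecutive posts, and any 4
    consecutive posts must fall within a 32-day span."""
    dates = sorted(post_dates, reverse=True)  # newest first
    # Gap rule: no more than 10 days between consecutive posts.
    for newer, older in zip(dates, dates[1:]):
        if (newer - older).days > 10:
            return False
    # Frequency rule: each run of 4 consecutive posts spans <= 32 days.
    for i in range(len(dates) - 3):
        if (dates[i] - dates[i + 3]).days > 32:
            return False
    return True

# Weekly posting easily qualifies; an 11-day gap breaks the streak.
weekly = [date(2009, 8, d) for d in (1, 8, 15, 22)]
gappy = [date(2009, 8, 1), date(2009, 8, 12), date(2009, 8, 19), date(2009, 8, 26)]
```

Posting roughly weekly keeps you comfortably inside both limits; one 11-day lapse is enough to reset you to Paper Man.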

Badge Progression:

  1. Paper Man / Paper Woman: start as Paper Man.
  2. Stone Man / Stone Woman: after four weeks, move on to Stone Man.
  3. Bronze Man / Bronze Woman: "get further" and you become Bronze Man. (12 posts?)
  4. IRON MAN / IRON WOMAN: 6 months of straight posting is IRON MAN.

Edit: updated with the female badges. It does not look good that two of those are missing!

Wednesday, August 26, 2009

August LA.PM update

August LA.PM meeting is tomorrow, Thursday August 27 at 7pm. See you then.
LA.PM homepage and directions

Our two speakers will be:

John Beppo
John will be talking about a COMET server he wrote in Perl, and integration into existing web applications.
Aran Deltac (bluefeet)
Aran presents: Destination Moose
Aran is an active CPAN developer, a Software Architect and Manager at ValueClick, the organizer behind the Thousand Oaks Perl Mongers, a member of the Perl Iron Man competition, and an advocate of Modern (aka Enlightened) Perl.
cpan: bluefeet
Thanks guys for agreeing to present! Looking forward to it!

Sunday, August 23, 2009

Perl6 hackathon

The kind fellows at Thousand Oaks Perl Mongers have moved the Perl6 hackathon from 8/22 to 8/29, so that I can make it. Thanks guys! So now I have 6 more days to get ready for perl6.

Thursday, August 13, 2009

August LA Perl Mongers Meeting

Hello Los Angeles!

What: Los Angeles Perl Mongers Meeting
When: 7-9pm
Date: Thursday, August 27, 2009
Where: The Rubicon Project HQ - 1925 S. Bundy, 90025
Theme: Perl!
Food: Pizza and beverage provided.
RSVP: Responses always appreciated.

Open Call for Presenters

What have you done recently with Perl? Come tell your friends and let us all learn together!


  1. Kenny Flegal: unknown presentation. He may be forced to cancel by an over-busy work schedule.
  2. You: talking 'bout stuff

About our speakers:

Kenny Flegal works at ValueClick, creating a new monitoring ubersystem. In his spare time he has a number of ventures to help lift others up by their bootstraps.

You are an awesome perler!

About your host:

  • Andrew Grangaard is a Senior Software Engineer at the Rubicon Project, and long time Perl Monk(ey).
  • The Rubicon Project (
       The Rubicon Project is an Advertising Technology Company headquartered in Los Angeles. Their mission is to automate the selling and buying of online advertising.

See you at 7 on Thursday the 27th


Wednesday, August 12, 2009

I am Iron Man!

Rock on. I'll see about getting them live this week. Thanks very much.
--Matt S Trout

Matt S Trout wrote:
> On Thu, Jul 30, 2009 at 08:14:32PM -0400, Andrew Grangaard wrote:
>> Matt,
>> I have my changes in a public github repository. They basically just sort the incoming posts by date when doing the comparisons.
>> repository:
>> git://
>> blogpost description:
> Rock on. I'll see about getting them live this week. Thanks very much.

follow-up to Iron Man Perl, Redux.

Sweet, we may get our badges soon! And Matt doesn't Top-post! Next thing you know, it'll be Christmas!

Monday, August 10, 2009

Finding Time

We always talk about "finding time" and "managing time," but of course we can't manage time -- it's going to tick by regardless. We can manage what we do: what we choose to do, what we accept from others. I'm failing at that. I have so much I have chosen to do, the queue keeps growing, and so much is pouring in from others (wife and work) that I need to say "no" to something. But to what?
Effective executives start with their time.

"Know thyself," the old prescription for wisdom, is almost impossibly difficult for mortal men. But everyone can follow the injunction "Know thy time" if one wants to, and be well on the road toward contribution and effectiveness.
--Peter Drucker

Most of the "no's" seem to be related to my personal projects. I have a few awesome code projects bubbling up to the surface. I've got a neat Moose+Catalyst app that I'd like to toss together and integrate into Facebook, and that whole project is just to encourage me along my other goals.

I'd like to have this ready to present to the Mongers, but when? And I need to send out invites for this month's Mongers meeting and especially find someone to present, since my scheduled speaker is talking of bailing -- because he, too, is too busy with work. Also, I finally have WebDAV access to the Los Angeles Perl Mongers site, so now that needs to be brought "up-to-date".

When will I get the time to hack around again?

While looking for a nice Drucker quote, I came across this article on Time Management, which looks interesting. Maybe if I had the time to read it...

Saturday, August 1, 2009

July Perl Mongers recap

July Perl Mongers was on Thursday, July 16. We had another great crowd of about 20 people. I found myself having to shoo people out of the office at 10:30pm so I could get some post-meeting work done. That's always a good sign.

David "Xantus" Davis started us off with a quick look at Mojolicious, an MVC framework built on top of Mojo. It is very new and very hip -- and *very* light on documentation at the moment, so it was neat to see someone who was having fun using it. Thanks a lot for volunteering your time to present!

Mon-Chaio's talk on "Interviewing (Perl) Programming Candidates" was surprisingly well received. I knew he'd give a polished presentation but didn't know just how much conversation and debate it would engender. I liked that it wasn't just a thinly veiled recruiting attempt since we're hiring. And who can forget bringing back the term perler. MC: Can you send me your slides, and I'll link them from here.

Informal polling showed that "third Thursday" doesn't work well, but that people do like Thursdays. In August we'll try "fourth Thursday": August 27, 2009. See you then!

Thursday, July 30, 2009

clone remote git to github

I've just made my first remote git repository at github. It's a clone of Matt Trout's Iron-Munger code. Looking back through my zsh history for git commands, this is roughly what I did. Comments are inline in my awesome 'pre' block.


#setup get information from my new github repository
% git remote add origin
% git config --global "Your Name"
% git config --global "Andrew Grangaard"
% git config --global granny-github@XXXXXXX

#generated my ssh key, and added it via the github web interface.
% less ~/.ssh/

#added this key to my keyring.  This way I don't type my passphrase every time, yet someone doesn't get access just by grabbing my key.
% ssh-keygen -t dsa -f ~/.ssh/github_dsa
% ssh-add ~/.ssh/github_dsa

# get my area ready:
% git init
% git add README
#commit first commit locally.
% git commit -m "first commit"

#setup my remote repository, and push to it: (I wonder which of these worked)
% git remote add origin
% git push origin master
% git remote add origin git@github:spazm/Iron-Munger.git
% git push origin master

#pull in the repository I want to clone:
% git remote add upstream
% git fetch upstream
% git push origin master

# pull in the repository
% git pull upstream master

# push it to my copy at github.
% git push origin master

#start making local changes
% git status
% git add IronMunger/ IronMunger/
% git status
% git diff plagger_loader.t
% git add t/plagger_loader.t
% git status
% git add t/stats_saver.t

# commit locally
% git commit

# push to github
% git push origin master

#pull down any changes from the original master
% git pull upstream master

Iron Man Perl, redux.

Badges? We ain't got no badges. We don't need no badges! I don't have to show you any stinkin' badges!

Matt Trout writes in his blog about almost being done with the Iron Man Perl Judging software. He wrote his beautiful modern perl code to pass a suite of tests. But when he put it all together, the system didn't work. So he put out an open call to find out what's up.

This seemed like a good opportunity to do something useful for the community. Also, a chance to read some modern, moose-y perl, and do my first git checkout. Oh, and do some debugging.

I spent much of Saturday afternoon hitting "Y/Ret" in cpan to get my 5.10 Debian install up-to-date with the Moose core. Once my system was current, I started running his tests. Sure enough, most of them passed. One of them didn't clean up after itself, so it failed on subsequent runs. But that was minor.

I started by digging into the heart of the matter: the test calculate.t, which verified the date calculation routines. Why did those tests pass while the same routines failed on the demo data later, via plagger_loader.t? I squirreled through the rest of the code, squinting at all the new funny Moose bits, and got an overall view of the plan.

By now it was late Saturday night, and I was having trouble sleeping. My mother-in-law was visiting, so we'd given her the bed and I was on the couch in the living room. By Santa Monica standards it was super hot (maybe 80F). Since I couldn't sleep, I pulled the laptop back out and jumped back to check one more hunch: "what is the difference between the data for calculate.t and plagger_loader.t?"

AHA! The age-old problem of real-world data not being nearly as pretty as test data. All of the post data created in calculate.t was neatly sorted by increasing age. The CSV data, plucked from real logs, was a hodgepodge of sorting. Maybe it was alphabetical, but it sure wasn't reverse chronological.
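In other words, the checks silently assumed newest-first input. As a tiny illustration (Python here, with hypothetical field names, not the real Perl objects), imposing that order is one sort call:

```python
from datetime import date

# Toy posts in the hodgepodge order real CSV data arrived in.
posts = [
    {"url": "b", "at": date(2009, 7, 2)},
    {"url": "a", "at": date(2009, 7, 30)},
    {"url": "c", "at": date(2009, 7, 16)},
]

# Reverse-chronological order, matching what calculate.t's test data
# already happened to have -- the Python analogue of Perl's
# sort { $b->at <=> $a->at }.
sorted_posts = sorted(posts, key=lambda p: p["at"], reverse=True)
```

Once the posts are newest-first, the streak calculations see the same shape of data the unit tests fed them.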

A quick change was called for: sort the data by reverse date. Where to slip this into the API? 1) Sort the post order when reading the CSV in the plagger loader? 2) Sort the post array in, either in check_both or check? Which change matches the intentions of the API designer? I don't know; that's up to you, Señor Trout.

This experience gave me my first interaction with git. See the next post for my blog entry on cloning remote read-only git to github.

These changes are now up on github.

In the meantime, here's the ghetto-diff version:
Synopsis: add sort { $b->at <=> $a->at }

method _expand_posts_from_file(IO::All::File $file) {
    return [
      sort { $b->at <=> $a->at }
      map $self->_expand_post($_),
Synopsis: add my @sorted_posts = sort {$b->at <=> $a->at} @posts; and call check on @sorted_posts instead of @posts.
sub check_both ($check, @posts) {
  my @sorted_posts = sort {$b->at <=> $a->at} @posts;
  return min(
    $check->(1, 10, @sorted_posts), # 10 days between posts
    $check->(4, 32, @sorted_posts), # 4 posts within any given 32 days

There is something very satisfying about spending a day or more debugging, in a process that eventually ends with adding five lines to the primary code base. It's the feeling of having cut through the accidental complexity to the heart of the matter. Finesse vs. force. Good thing I don't get paid by the KLOC.

Hope this helps! Now when do we get our badges?