Tuesday, October 27, 2009

LA Perl Mongers: Pushed back a week

Our next L.A. Perl Mongers meeting has been pushed back a week.
Instead of 10/28 we'll now meet on Wednesday, November 4th.

Topics:

  1. Tommy Stanton: Testing for the Win! Test::More, Test::Most and Test::Unit.
  2. Andrew Grangaard: either "Care and Feeding of third party modules, revisited -- a local::lib example" or "Hadoop with Perl"

la.pm.org

Wednesday, October 21, 2009

Rakudo October Release -- aka Thousand Oaks

Congrats Aran, Shawn and Todd!

Exciting news. Jon has decided to name the October Rakudo release after the TO group, due to the excitement and buzz from our perl6 hackathon.

Rakudo Perl follows a monthly release cycle, with each release code named after a Perl Mongers group. The October 2009 release is code named "Thousand Oaks" for their amazing Perl 6 hackathon, their report at http://www.lowlevelmanager.com/2009/09/perl-6-hackathon.html, and just because I like the name :-)
-- Rakudo.org

Thanks for inviting me to the perl6 hackathon; that was loads of fun. Crazy to remember a time when I could spend 8 hours on a Saturday doing fun hacking instead of working... Looking forward to doing it again, maybe hosting down here in Santa Monica (but it's hard to hack when the beach is right there, just calling). Maybe this time I'll even write some perl6 (though pir was fun too).


Subject: Rakudo October Release
From: Jonathan Scott Duff <perlpilot@<elided>.com>
Date: Tue, 20 Oct 2009 21:42:54 -0500
To: Andrew
Hello there, I'm handling the Rakudo release for October in a couple of days and I wanted to let /someone/ from TO.pm know that I've chosen Thousand Oaks as the code name for this release for two reasons:

1) You guys held a Perl 6 hackathon TO++
2) I just like the name "Thousand Oaks" :-)

Your blog was linked in the release document, so you're it for me contacting TO.pm.

I realize I probably could have just emailed TO's mailing list, but for some reason that didn't sit well with me. It felt as if it would be too abrupt. In any case, feel free to share the news with the rest of TO.pm (or just wait for the release announcement if you want :-)

Anyway, cheers,

-Scott
--
Jonathan Scott Duff
perlpilot@<elided>.com

Monday, October 19, 2009

Cowboy culture.

Cowboys.

What is a cowboy, in this context? He's writing scripts, not programs, not software. He's someone who runs off on his own, thinking he understands what he's working on when he's really cargo-culting, making guesses and assumptions. He loves making changes live in production and uses phrases like "I think it should work, right after this patch." He patches things up in a belt-and-suspenders way that obscures the original logic or business case. Cowboys have yet to see the light of testing (at all, let alone automated unit testing with high coverage), and they check things into source control after the fact -- code which may or may not match what is in production (which may well vary from machine to machine). And "hey, this 6000 line function should basically work, most of the time, but when it doesn't I have errors sent to /dev/null in the crontab."

Let's say you're a cowboy or you work with one. How do you survive? Adapt? Reform?

Blast off and nuke it from orbit, it's the only way to be sure.
--Aliens
If only we could just nuke it. But that's not going to happen until you have a replacement. The fact that the replacement is cleaner and more understandable won't be enough unless it is also faster and provably more correct, and quite likely it'll need to be bug-for-bug compatible with the old code.

What's the problem with just rewriting it all from scratch? First, code archeology time. When dealing with legacy code that is spaghetti, undocumented, and untested, you will spend most of your time figuring out what it does and why. Is it important that this test short-circuits? Why is most of the logic up here, but some is down there? Does the sort order of this array of key names matter? Are there implied dependencies?

Now, the only reason you're in there is that something is broken, and someone has allocated some time and resources to fix it, firefighting style. Otherwise, no one wants to go near that code, especially given that the code owner seems to be putting in herculean efforts to keep it running (staying up late most nights handling error escalations that ring his pager at 4am). Of course, it is also probably terribly important code to the business (it handles logs or stats or something similar in the direct money path), so it HAS TO BE DONE! OMG OMG THE SKY IS FALLING FIX IT RIGHT NOW!

Resist the urge to just jump in and make that one change. Yes, someone will be yelling "why isn't this ready?" and you'll have to be firm in your reply: in a chaotic system, you can't know that a simple change won't affect the whole system negatively. Blind refactoring may just further hide the business logic and cause you to rewrite and obscure current bugs. You have to assume there are problems beyond the one you are fixing; no one has noticed the others yet (or noticed and failed to report). You don't want to be the one called at 4am because your simple fix took payroll offline when the 3am job kicked in expecting some old, broken format.

I recommend a two-fold approach. First, start with a light bottom-up refactor. Trim the lexical variables down to their minimum needed scope, and change the seven $t variables into useful names that match their scope (big scope == bigger, more descriptive name). Pull blocks of that behemoth function out into smaller functions. Find a test case that exercises the important features, and make sure the original code runs on it reproducibly, giving the same output each time. Really, start with this test case: I've been burned so many times chasing my tail because my test output didn't match the original, due to some random or non-causal output from the original code. Then find any external dependencies (the database, time-of-day, phase-of-moon) and start thinking about how you'll test them.
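A tiny Perl sketch of that bottom-up pass (every name here is invented for illustration):

  use strict;
  use warnings;

  # Before: one of the seven $t variables, alive for hundreds of lines:
  #   my $t = 0; ... $t += @{ $_->{rows} } for @{ $report->{sections} }; ...
  # After: the block is pulled out into a small, named sub, and the
  # lexical shrinks to the smallest scope that still does the job.
  sub total_report_rows {
      my ($report) = @_;
      my $row_count = 0;    # tiny scope, so a short name is fine
      $row_count += scalar @{ $_->{rows} } for @{ $report->{sections} };
      return $row_count;
  }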

Once you have that done, start working top-down. You can't do this step until you've been steeped in the code a bit, as you won't know the right questions to ask the business sponsors. You have to know how it does things without letting that set your view of how things will be done -- a fine line to tread. Now look at the problem and the problem domain and ask whether this approach is still valid, given the way the data and data model have changed. Can you bring some patterns into the code, separating out pre-processing, report definition, and number crunching? What can you learn from the evolution of the old code, over multiple passes of tweaks and updates, about what we've learned from the business? There is value in that knowledge, if you can separate the wheat from the chaff: the important changes from the incidental, accidental, and cargo-cult-copy-and-pasted ones. Build your modules from the ground up to be reusable and modular, yet designed for the business case that the current script handles. Test as you go; you'll be so much happier.

Now you have your middle rewrite written. It has some of your top-down and all of your bottom-up changes. Test it against some minimal output, comparing with the old script. You're going to be running this test a lot, so please write a script to automate it. You'll thank yourself later, which is nicer than cursing yourself out later because that half-hour test has gone wonky because you broke your shell command history. Then you'll be adding bug-for-bug compatibility, to make sure your code produces output that matches current production. Add those bugs, really. And then document them in your bug tracking system and make sure they really are bugs, and not "oh my goodness, of course I need the fact that the reports come out sorted by the third character of the report name" functionality that someone expects.
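The automation can be as dumb as running both versions and diffing the output; a minimal sketch (the script names and paths are invented):

  use strict;
  use warnings;

  # compare the old and new implementations on the same input file
  my $input = shift or die "usage: $0 input-file\n";

  system("perl old_report.pl $input > /tmp/old.out") == 0
      or die "old version failed to run\n";
  system("perl new_report.pl $input > /tmp/new.out") == 0
      or die "new version failed to run\n";

  # any diff output means a difference; exit loudly so this can gate a commit
  my $diff = `diff -u /tmp/old.out /tmp/new.out`;
  die "outputs differ:\n$diff" if length $diff;
  print "outputs match\n";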

When they match, make the switch. Now there should be no visible difference from the outside, but you can go to town on your in-between scaffolding code. That's why you put in all those unit tests: you can hack up chunks of internals and know you aren't affecting the eventual output. Soon you'll have a business-critical chunk of software that won't page you at 4am, a program you're proud to have rescued from its prior life as a "script written by a cowboy."

Update: Todd sent me links to two of his cartoons from asciiville, from when he was dealing with a "slew of these cowpies".

BTW: I loved the Cowboy Culture post. I had to fix a slew of those cowpies about a year ago, and I drew a couple of toons as a release. I thought you might enjoy these as we are on the same wavelength with respect to cowboy coding.. :)

bedtime stories

the new recruit

Monday, October 12, 2009

Congrats on the new book, Wei-Hwa!

Wei-Hwa Huang has a new book out, Mutant Sudoku. As one of the world's top puzzlers, he's sure to bring an interesting take on the game. Congratulations on your new book!

I remember when he first introduced me to Sudoku via a terrible pun involving a certain Count from Star Wars, about 2 years before the US sudoku craze began. I really didn't appreciate the quality of the joke until much later, and now I can't seem to find that quote in the gale logs. I did find a mention of the sudoku t-shirt he made in silk screening at Caltech, circa 1996. So clearly he's been aware of these puzzles for a long time.

My sophomore year I took the Putnam Exam, and was happy to emerge from the test alive, I think I even got a few points of partial credit. Wei-Hwa, as a frosh, did amazingly well. Now that I look it up, I see he was a Putnam Fellow in 1993. Top 5 in the country. Wow.

Seriously, how many people do I know with their own wikipedia entry?

Sunday, October 11, 2009

LA perl mongers October update and September recap

September's perl mongers meeting was awesome. We had two presentations (both from me!). The meeting for October will be Wednesday, October 28th. The first presentation was an example of getting work done using perl: specifically, using JIRA::Client (a thin wrapper around SOAP::Lite) to access a JIRA bug tracking installation and pull bug counts. The slides included fully working example code. The author of JIRA::Client commented on the blog post. That is so exciting! That's what community and social coding feel like.
Nice example. It inspired me to make it easier to get from filter names to filter ids. I just released JIRA::Client version 0.16 which implicitly casts filter names into filter ids in the calls to getIssueCountForFilter and getIssuesFromFilterWithLimit.
-- Gnustavo
The second presentation was a discussion of "care and feeding of third party perl modules." We started with my blog post and went around discussing what approaches people had tried, which ones they liked, and which ones they found lacking. Tommy was kind enough to run the video camera for part of the discussion, so once I transfer the tape, we'll have something to upload.

Some of the main points to come out of the discussion were: the importance of staying up-to-date, of having unit tests for the features you expect and use from external modules, and of green-field testing (making sure you can build it all from scratch in your test environment). The need for a company to institute some sort of revisioning on top of CPAN came up a few times.

Some novel ideas included: source control hooks that check for external modules used in a given commit and update all those modules to the current release, requiring the programmer to verify that his/her checkin works with current modules; a single repository for third party modules and other code that can be easily pulled into any internal project or repository (local::lib helps here); and reconsidering what is pushing you to be "up-to-date", as maybe you don't actually need it (sacrilege!).

Some related tasks that need to happen soon: upload the video and notes from the second presentation before the October meeting. Update the website with the October date (Edit: done). Put up the slides for the JIRA::Client talk. Find a speaker for October (Tommy signed up, but then had to defer to November). Find a November date that works around Thanksgiving. Decide if we need to cut back to a single speaker (or two speakers every two months).

Tuesday, October 6, 2009

risks and mistakes

People who don't take risks generally make about two big mistakes a year. People who take risks generally make about two big mistakes a year.
-- Peter Drucker

This was our quote of the day yesterday at work. Serendipity that Mallory would pick one of my favorite quotes on the one year anniversary of my coming to work for the Rubicon Project.

I think I've made my share of mistakes over the past year -- mostly from not taking big enough chances and not adapting and changing quickly enough. Something to think on during the coming year.

Sunday, October 4, 2009

vimdiff ... where have you been all my life?

I'm finally giving vimdiff a try. I normally get by with diff, diff -u (unified), and diff -u -w (unified, ignore whitespace), and their svn equivalents svn diff (unified) and svn diff -x -w (to ignore whitespace).

I knew vimdiff existed, and had used it trivially once or twice, but I never felt a need to dive in.

Right now, I'm looking at a file of a coworker's modified code, trying to figure out which version it originally corresponded to. I've looked at the diffs, and I think I know which version it is, but I was having trouble comparing the lines to see what some of the diffs mean.

I popped it up in vimdiff (vimdiff file1 file2) and now I have a lovely side-by-side view of the two files, with coupled scrolling. Chunks of unmodified code are folded and out of the way. The vim folding commands work normally: zo will open a given block if I need to see that code, and zc will close that block back up; zR opens all the folds and zM closes them all.

The normal vim window/frame commands can be used to switch between the two frames. Since scrolling in either file scrolls both files, there isn't a big need to switch between the frames, except when examining long lines. By default, line-wrap is off for the diff, so long lines appear truncated until the viewport is scrolled. control-w w will switch between the two frames. Jump to the end of the long line with $. Again, both frames scroll together left/right just as with up/down. Alternatively, :set wrap in command mode will enable word wrapping; this needs to be done in each frame independently. If literal tabs are making your lines too long, try displaying tabs as smaller entities: four characters with :set ts=4 or two characters with :set ts=2. Again, this must be applied to each buffer independently.

I really like the intra-line difference highlighter. The whole line is highlighted, but the changed portions are highlighted in a different color -- purple and red respectively in my setup. That helps me pinpoint the exact character changes, so I can focus on seeing the "why" of the change instead of digging for the "what".

vimdiff and svn are not an either/or proposition. svn allows an external diff command, via the --diff-cmd switch. Unfortunately, vimdiff doesn't work out-of-the-box with this flag, as svn adds extra information when calling the diff program. I have a very short wrapper called vimdiff-svn-wrapper that I use to drop the extra arguments. I keep it in my path and use svn diff --diff-cmd vimdiff-svn-wrapper filename to run svn diff on filename, displaying the output in vimdiff.
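The wrapper can be tiny. Here's a sketch of the idea (not the exact script): svn invokes the external diff command roughly as diff-cmd -u -L <old label> -L <new label> <old file> <new file>, so all the wrapper has to do is keep the last two arguments.

  #!/usr/bin/env perl
  use strict;
  use warnings;

  # vimdiff only wants the two file paths, which svn passes last
  die "expected svn-style diff arguments\n" if @ARGV < 2;
  exec 'vimdiff', @ARGV[-2, -1] or die "couldn't exec vimdiff: $!\n";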

On the other end of the spectrum is svnvimdiff from vim.org, which runs vimdiff on the output of svn diff. It's messy in the way it uses temp files, and the version I downloaded last year didn't work when I just tried it, so I've written a new version. Had I checked the link, I'd have seen the original is now on version 1.5 while I had version 1.2. My version uses the vimdiff-svn-wrapper with svn diff --diff-cmd. I have directly copied his clever method of getting the modified svn files by parsing the output of svn stat.

Time to get back to figuring out the changes in his code...

Dear Moose

Dear Moose,
CC: TDD

Just a quick note to say, "Thanks for being awesome!"

Hanging out with you both this weekend was awesome. I love the little test-driven proof-of-concept program I wrote with you guys. It was a blast!

I wish I could share the code with my other friends, but you know it is Antony's project and he's a bit paranoid that his idea will escape into the wild. Normally, I think that's a silly attitude, but in this case I understand. His project is definitely not something I'd thought of or considered previously and after a first stab of research it still seems novel. More importantly, after he described it I couldn't stop myself from blurting out: "I WANT ONE!" Which is a good initial reaction for a consumer product.

I had ideas for how to build a simulator for the concept bouncing around my head all week. I finally got both time and energy together on Saturday night. Moose, making objects with you let me focus on the features rather than the boilerplate. Ten lines later and I had a loadable module. These should be Ints, these should be Bools, this one can't be changed -- I told you, and *poof*, my module had input verification. Very slick. Then I got started writing tests for new features, so I could write the features.
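A look-alike of what those ten-ish lines can look like (the class and attribute names are made up, not Antony's):

  package Gadget::Sim;
  use Moose;

  has width   => (is => 'rw', isa => 'Int');   # "these should be Ints"
  has height  => (is => 'rw', isa => 'Int');
  has powered => (is => 'rw', isa => 'Bool');  # "these should be Bools"
  has serial  => (is => 'ro', isa => 'Int');   # "this one can't be changed"

  __PACKAGE__->meta->make_immutable;
  1;

Pass a string where an Int belongs and the constructor dies with a type-constraint error -- that's the *poof* input verification.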

Write test, fail test, write feature, pass test.
--TDD
TDD, it seemed so strange when you first said it, but I'm starting to get it now. It's getting to be a real rush to test the code and see it pass. I'm glad our buddy vim was there to help: :make for the win.

The design is simple enough that refactoring becomes more obvious. With the minimal boilerplate overhead, it was easy to pull my messages out into first-class objects. Maybe that will eventually become a role?

Here's a look-alike of the test I wrote to verify that I could build the objects and get a message object to pass between them.
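(The class and method names below are stand-ins for the real, private ones.)

  use strict;
  use warnings;
  use Test::More tests => 4;

  use_ok('Gadget::Sim');
  use_ok('Gadget::Message');

  my $sender   = Gadget::Sim->new;
  my $receiver = Gadget::Sim->new;
  my $msg      = Gadget::Message->new( payload => 'ping' );

  isa_ok( $msg, 'Gadget::Message' );
  $sender->send( $msg => $receiver );
  is( $receiver->last_message->payload, 'ping',
      'message object passed between the two objects' );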

Looking forward to hanging out with you again soon. Take care!

peace,
Andrew

Thursday, October 1, 2009

local::lib

I used local::lib this week. Wow, it really is a nice way to install cpan modules into my source control tree, to build an app-specific perl5 lib.
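The basic pattern, sketched against a hypothetical vendor/perl5 directory inside the project tree:

  # install a dependency into the app-specific lib (from the shell):
  #   perl -MCPAN -Mlocal::lib=vendor/perl5 -e 'CPAN::install("Moose")'

  # then, at the top of the application:
  use strict;
  use warnings;
  use local::lib 'vendor/perl5';  # prepends vendor/perl5 to @INC, sets env vars
  use Moose;                      # now resolved from the app-specific lib first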

Tuesday, September 29, 2009

Book Review: Ace on the River

I just finished (Aug 28th) listening to Ace on the River, by Barry Greenstein, read by the author.

This is a very interesting look into the life of a professional poker player. It is more a memoir than an "Advanced Poker Guide". It does include a lot of information on playing the larger game -- building a life while playing poker.

Things I liked:

  • Talking about the difference between tourney and cash game styles, especially how the cash game people look down on the tourney players, yet are jealous of the "respectability" afforded the tourney players.
  • The last section walks you through several real tourney hands: how he played them, and what the optimal plays were. On the audio version, there are pauses to force you to think about the situation and come up with your own answer before he tells you his thoughts.
  • While tourney payouts are top-heavy, so it's best to win, it is important to guard your chips more than in a cash game, since there is no rebuy. This doesn't always mean cautious, tight play. Really it means evaluating where your stack is relative to the big blind -- and in a tourney the blinds are always moving up, so they are a big chunk of the stacks. It may well be that you take a big chance when you're below 8 big blinds, because you've got to take it to stay alive.
  • Winning early hands gives chip power. Use your big stack "as a bludgeon" on your opponents
  • Conversely, realize the power of the short stack (more useful in a cash game): it's easier to get odds to double you up. In a tourney, it's still nicer to have the big stack!
  • The emphasis on maintaining a life outside of the game.
With a foreword by Doyle Brunson, how could you not be interested?

Sunday, September 27, 2009

Business Motivation

An important message and a beautiful example of design.
Enlarge the image to see the full message.

Intrinsic > Extrinsic

Found this on Dan Pink's blog.

Wednesday, September 23, 2009

LA PM tonight.

I'm looking forward to seeing you tonight at 7.
1925 S. Bundy, LA, CA, 90025.

I'll be your speaker tonight, as our originally planned speaker has an injury and can't make it. I've put together a presentation and a Q&A discussion.

Presentation:
* Accessing the JIRA SOAP api using JIRA::Client.
JIRA::Client is a thin wrapper around SOAP::Lite that provides a few additional convenience methods. I'll compare and contrast Perl and Ruby implementations of a simple script to query bug counts.

Discussion:
* Care and feeding of third party Perl modules at work -- how do you do it?
I've asked a few of you this question, and heard a variety of answers. I think it'll be interesting to discuss the various strategies that are out there. How is this not a solved problem that everyone knows? (Or am I just out of the loop?)

I have blog posts on both of these, if you'd care to comment before the meeting: access-jira-api-from-perl-with
care-and-feeding-of-third-party-modules

Info, directions and parking map: losangeles.pm.org

Tuesday, September 22, 2009

cryptic git status: "failed to push some refs"

A first stumble with git: error: failed to push some refs to 'git@github.com:/spazm/examples.git'.

And here's how I fixed it.

[agrangaard@willamette]% git fetch                               129 ~/examples
[agrangaard@willamette]% git push                                  0 ~/examples
To git@github.com:/spazm/examples.git
 ! [rejected]        master -> master (non-fast forward)
error: failed to push some refs to 'git@github.com:/spazm/examples.git'
In my case, this meant the remote had changes that weren't yet merged into my local branch. git status showed no changes, and a git fetch wasn't listing any action or any changes.

Once I realized I needed to do a git pull -- more specifically, a git pull origin master -- the merged code was pulled in (a pull is just a fetch followed by a merge).

[agrangaard@willamette]% git pull origin master                    0 ~/examples
From git@github.com:/spazm/examples
 * branch            master     -> FETCH_HEAD
Auto-merged jira.conf
CONFLICT (content): Merge conflict in jira.conf
Automatic merge failed; fix conflicts and then commit the result.
[agrangaard@willamette]% git status                                1 ~/examples
jira.conf: needs merge
# On branch master
# Changed but not updated:
#   (use "git add ..." to update what will be committed)
#
#       unmerged:   jira.conf
#
no changes added to commit (use "git add" and/or "git commit -a")
[agrangaard@willamette]% vi jira.conf                              1 ~/examples
Since a manual merge was needed, I opened jira.conf and resolved the conflict. Then I committed it locally and pushed it to the remote repository:
 [agrangaard@willamette]% git add jira.conf                         1 ~/examples
[agrangaard@willamette]% git status                                0 ~/examples
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#       modified:   jira.conf
#
[agrangaard@willamette]% git commit                                0 ~/examples
Created commit e69f257: Update jira.conf
[agrangaard@willamette]% git status                                0 ~/examples
# On branch master
nothing to commit (working directory clean)
[agrangaard@willamette]% git push                                  1 ~/examples
Counting objects: 13, done.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 1.71 KiB, done.
And now my local checkout is in sync with my remote repository at github. Woo!

Care and feeding of third party perl modules at work -- how do you do it?

How do you install and maintain third party perl module dependencies in your work perl code base? How about your own perl modules?

I'll be hosting an open discussion on this at the Los Angeles Perl Mongers meeting tomorrow (Wed 2009-09-23). Let's see what we can dig up.

I've been at a few perl companies now, and they all seem to have taken different approaches. Yes, TIMTOWTDI, but TIMTOWTDIWrong, aka TIMTOWTFIU. This seems like it should already be a "solved problem," so what are the best practices in this arena?

Popular memes in this space:

  • System Perl
    • Install to system perl via cpan. Upgrade whenever.
    • Install to system perl via a package manager, deb or rpm, manually rolling unpackaged cpan modules through a tool like cpanflute. Two variants:
      • install those rpms/dpkgs manually
      • run a local yum/apt-get/dag repository internally to pull "trusted" internally packaged cpan modules
    • Install via system tools (puppet, etc.) live on each box, from a configuration file of needed modules.
  • (shared) Application Perl: compile your own single perl version for internal applications
    • Install into the application perl via cpan.
    • Check the source of third party modules into local source control; install during application install.
    • Check built (binary) versions of third party modules into local source control.
  • Install into a local library, using local::lib with either the application or system perl.
  • Install from a "mini cpan", maintained internally and populated with vetted, approved third party modules (see the sketch below).
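For that last approach, CPAN::Mini does the heavy lifting; a minimal sketch of the mirror-update job (the URL and path are placeholders):

  use strict;
  use warnings;
  use CPAN::Mini;

  # mirror the latest distributions down to an internal directory
  CPAN::Mini->update_mirror(
      remote => 'http://cpan.mirror.example.com/',   # placeholder mirror
      local  => '/opt/minicpan',                     # placeholder path
      trace  => 1,
  );
  # module_filters can narrow the mirror toward a vetted list; point the
  # CPAN shell at the result with:  o conf urllist file:///opt/minicpan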
How do you do it?

If you're just installing a handful of modules, the ad-hoc approach may be sufficient. Right now I'm writing some new apps with Moose, and installing that module dependency chain seems crazy without figuring out a plan. With Moose and the MooseX namespace there are probably 20+ modules I want to install, and many of them are improving and releasing rapidly. Edit: Sorry Dave, you're right, not hundreds of modules. I didn't mean to spread Moose FUD; I love the Moose.

Plan: flesh out this page with links to the current wisdom on these matters, from fellow iron mongers, perl monks, etc. I know I've read half a dozen recent articles on this; time to find and organize them.

Access the JIRA API from Perl with JIRA::Client

Quick problem: I want to know the bug count for the upcoming release, which encompasses two jira projects. I'd like to know the count of open, closed, and resolved bugs for each of the two projects, and combined. I have created several stored filters that show me the bugs/issues/tickets in each category, with a count.

Because the issues are in multiple projects, combining them in one search and filtering by closed status is daunting directly in Jira.

A quick CPAN search later and I had JIRA::Client installed on my workstation. After jumping through a few hoops, I got the JIRA admin to enable SOAP API access -- the API is installed but disabled by default.

The hardest part was finding the correct API docs for our version of JIRA. I didn't think 3.11 to 3.13 would be a big difference, but it was.

Tonight I threw together a quick script to grab the bug counts across each of my 6 saved searches. It started out looking something like this (a look-alike; the URL, credentials, and filter IDs below are placeholders):
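  use strict;
  use warnings;
  use JIRA::Client;

  my $jira = JIRA::Client->new( 'http://jira.example.com', 'user', 'secret' );

  my %filter_ids = (
      'proj-a open'     => 10010,
      'proj-a resolved' => 10011,
      'proj-a closed'   => 10012,
      'proj-b open'     => 10013,
      'proj-b resolved' => 10014,
      'proj-b closed'   => 10015,
  );

  for my $name ( sort keys %filter_ids ) {
      printf "%-18s %d\n", $name,
          $jira->getIssueCountForFilter( $filter_ids{$name} );
  }

Eventually I prettied it all up with flags, config files and documentation and pushed it to my github repository named "examples": jira.pl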

Is there a way to embed a file from my github the same way as I can for gists from gist.github.com?

Pair Programming with gnu screen

Good ole' screen, nothing beats screen
-- bart simpson (almost)

Setting up a screen session for sharing

  1. screen needs to be setuid root for multiuser sessions to work
  2. set the multiuser flag to on
  3. use acladd or aclchg to add acl rights for other users. This must be done after enabling multiuser.

make screen setuid:
ls -l $(which screen)
sudo chmod +s $(which screen)

Note: in zsh this can be simplified by using = to replace the $(which ...) block.
ls -l =screen;
sudo chmod +s =screen

Update: On recent ubuntu installations (9.04 and newer), /usr/bin/screen is a wrapper around /usr/bin/screen.real. /usr/bin/screen.real is the file that needs to be made setuid.

sudo chmod +s /usr/bin/screen.real

Note: I'll use "^A" to represent the screen key.

^A:multiuser on
^A:acladd usertwo
Now usertwo can see the screen session of userone with the following command (The trailing slash is important!)
  screen -ls userone/

And connect with either of:

  • screen -x userone/
  • screen -x userone/name-of-session
In the latter case, name-of-session is used as a search string within the available named screen sessions.

If you used acladd, then usertwo comes in with full capabilities -- write and execute. It is possible to restrict users via the write bit (allows sending characters to the screen, assuming no one else has a write-lock) and the execute bit (allows screen commands to be run). These permissions can be granted on a per-screen and per-user basis.

Check out more on the screen man page.

Now, you and your remote pair can get back to work. One of you coding, one of you watching. Both of you creating.

Hints:


  • try and get this set up and running while userone and usertwo are both local so it is easier to debug.
  • you'll need to own your own pty. This is not normally a problem, unless you are using su or sudo to change users, which you may want to do while testing.
  • don't forget to set multiuser to on *before* trying to run the acl commands -- I think that ordering was important.
  • Use an out-of-band method for communicating. I like a VOIP connection or even a ytalk or irc back channel. ( I'm still waiting for a jabber based tool that will do all of this for me, all while integrating into vim. )
  • give your screen session a useful name with the -S flag, like screen -S shared or screen -S bug_1234
  • I have vim open in one window with my perl module and my .t test (vim -o foo.pm t/foo.t), and a second window primed for running the test file. If test output is short, I just run it in the vim session with :make, but for long, slow, verbose tests, I run them in the second window. This lets my screen partner run tests at his/her leisure while I'm typing code.
  • set your shell to auto-update the screen title for a given screen when a command is run.
  • use double-quote (^A ") to get a named list of screen windows; choose the window to switch to by number, j/k, or arrows. These two make it easier to follow what is going on.

Tuesday, September 15, 2009

Welcome Aboard: Others Online

The Rubicon Project announced the acquisition of Others Online today. Welcome to the team guys! It's nice to finally be able to talk about my new coworkers.

Official announcement

With today’s acquisition of Others Online, Rubicon Project aims to provide its publisher customers a way to find the best data on audience segments from among all those data providers. Others Online, which apparently started out several years ago as a social network play, offers several audience data-related services—in particular an “affinity scoring” service that determines how strongly a person is interested in particular brands, products or topics.
-- Businessweek

Monday, September 14, 2009

Motivation -- Autonomy, Mastery and Purpose

"Autonomy, Mastery and Purpose. It's not Utopian, I have proof."
--Dan Pink.

This TED talk from Dan Pink covers the economics of motivation. It's an awesome presentation with good information and a compelling speaker. The thesis: knowledge work doesn't improve with higher incentive rewards; in fact, it may be hindered by them. Why is there a mismatch between what Science KNOWS and what Business DOES?

This has been replicated over and over and over again for nearly forty years. These contingent motivators -- "if you do this, then you get that" -- work in some circumstances, but for a lot of tasks they don't work, or often they do harm. This is one of the most robust findings in social science, and also one of the most ignored.

The talk dovetails nicely with the book I'm reading, Predictably Irrational by D. Ariely, which I would suggest for further reading, as well as Management Rewired, which is next on my reading list.

Two quotes from Ariely used in this talk:

"As long as the task involved only *mechanical* skill, bonuses worked as they would be expected: the higher the pay, the better the performance."

"But once the task called for "even *rudimentary cognitive skill*," a larger reward "led to *poorer performance*."

What would Peter Drucker say? I think he'd approve. Drucker "maintains that people motivate themselves. You can't motivate them; you can only thwart their motivation. To be an effective leader you must recognize that the business you're in is the obstacle identification and removal business." (source)

Sunday, September 13, 2009

September LA Perl Mongers Meeting

See you all in a week-and-a-half! Please let me know if you are interested in presenting!

What: Los Angeles Perl Mongers Meeting
When: 7-9pm
Date: Wednesday, September 23, 2009
Where: The Rubicon Project HQ - 1925 S. Bundy, 90025
Theme: Perl!
Food: Pizza and beverages provided.
RSVP: Responses always appreciated.

Our website has been updated with the current info as well as titles of past talks.

I haven’t recovered enough from the previous meeting to write up a recap, but it was definitely off-the-hook. We ranged from co-routines to Moose to Jaeger shots, with the last guests stumbling off at 1:30am after the poker game.

Thanks,
Andrew

Friday, September 11, 2009

Remembrance.

Mallory Portillo wrote:

In honor of the fallen (friends of the Rubicon family):
Todd Beamer
Scott Weingard
John Murray

It is often said that out of great hardship comes strength. Instead of letting the tragic day tear us apart as a country, we came together as a nation and mourned together, helped each other, and shared memories of the event and of the fallen. It is our saddest day but likely our proudest moment. We showed, as a united country, that freedom is a strength not a weakness and in the face of great tragedy we stop at nothing to come together and hatred cannot tear us apart.

Eight years later, we will all likely remember the exact moment when we saw the first plane hit the World Trade Center, but more importantly this day will always be a time for us to reflect on what is so great about this nation and its people. Today in the LA office, we are having a pot luck to share food amongst friends and also raise money for a local non-profit. In the spirit of 9/11, please take a moment wherever you are to share with others (even if it's just a smile on the street or an email to a friend).

To those who knew Todd, Scott and John, my condolences on the tragedy of your loss.

Thank you for sharing this act of Remembrance. None of us are alone in our grief, for we are all united. Let us hope that the legacy of this day will be of a united, strengthened people and a lasting day of Service.

This is my third anniversary back in California and it has felt so strange for this to be a "normal day" for so many people. A day of Remembering puts into stark relief the pettiness of our regular days and reminds me how lucky I am to be loved and living a wonderful life. Mallory, thank you for reminding us why the Rubicon Project is such a special place.

peace and love,
Andrew

"On a day when others sought to sap our confidence, let us renew our common purpose, let us remember how we came together as one nation, as one people, as Americans united, Such sense of purpose need not be a fleeting moment."
— President Obama, September 11, 2009.