Saturday, December 1, 2012

Perl Advent Calendars: 2012

Move on over, Movember! Happy December!
Make room for Perl Advent Season!

We are blessed with many Perl-themed advent calendars. I'm so excited to have so many squares to open! Now, if I could just get my article(s?) for the Perl Advent Calendar finished (started!!!)

Yearning for something more active than just reading an article each day? Make a pull request to your favorite OSS projects with 24 Pull Requests, and brighten immeasurably the day of your favorite developer.

Perl Advent Calendars

Perl Advent
http://perladvent.org/2012/
(Formerly the perladvent.pm.org calendar)
Perl Dancer -- the Dancer mini web framework
http://advent.perldancer.org/2012/
OX -- a web anti-framework
First-time advent calendar!
http://iinteractive.github.com/OX/advent/
Perl 6!
http://perl6advent.wordpress.com/
For the adventurous: Japanese Perl Advent Calendars, 2 different tracks:
http://perl-users.jp/articles/advent-calendar/2012/
http://perl-users.jp/articles/advent-calendar/2012/hacker/ Hacker Track
http://perl-users.jp/articles/advent-calendar/2012/casual/ Casual Track

Retired Advent Calendars:

Catalyst Advent Calendar -- The Catalyst Web Framework
The Catalyst advent calendar has been retired and replaced by a monthly series. The past seven years of calendars are still available.
http://www.catalystframework.org/calendar/
Ricardo's Advent Calendar -- a month of RJBS. Not updated since 2010, but he did give us a Hanukkah calendar last year!
http://xn--8dbbfrx.rjbs.manxome.org/2011/ Hanukkah 2011
http://advent.rjbs.manxome.org/2010/
Plack advent calendar: Not updated since 2009.
http://advent.plackperl.org/

A bonus list

For the sysadmin and web geeks in your life:
SysAdvent - The Sysadmin Advent Calendar.
http://sysadvent.blogspot.com/
24 ways - Advent Calendar for Web Geeks.
http://24ways.org/

Monday, November 19, 2012

Fingerworks Touchstream on Mac OSX 10.7.4

Woohoo! "New" FingerWorks TouchStream keyboard arrived today. This is my first TouchStream. It's from before FingerWorks was bought by Apple and shuttered, circa 2005.

It works out of the box, but the full experience requires running the configuration software to change the chord/multitouch bindings. Getting this running on modern hardware is challenging:

  • The company website is down.
  • The installer is PowerPC-only (no longer supported by Apple).
  • The application itself is a Java app that requires an old version of Java (no longer supported by Apple).
  • The Java app uses the open-source jusb library, which has been mostly abandoned.
The awesome people at the fingerfans message board have done a lot to keep these beloved pieces of future-tech up and running. They have a copy of the original website, the original help forums, original software, third-party software, manuals, PDFs, and instructions for repair. I've seen posts on replacing the FPGA, which is just NUTS-slash-Awesomesauce.

My steps:

  • Install the 1.5.3 software.
    I downloaded this custom installer for Linux and ran it on my Mac: 1.5.3 software
    wget http://fingerfans.dreamhosters.com/download/setupfw153_noJava.bin
    sh setupfw153_noJava.bin
  • Update jusb.
    Download a patched jusb from github, then build and install it into /Applications/FingerWorks/:
    git clone https://github.com/DanThiffault/jusb.git
    cd jusb
    make
    cp -r libjusbMacOSX.jnilib* /Applications/FingerWorks/lib/jusb/
    cp jusb.jar /Applications/FingerWorks/lib/jusb/
  • Install an alternative run script, mtu_run.sh, into /Applications/Fingerworks:
    wget -O /Applications/Fingerworks/mtu_run.sh https://raw.github.com/gist/1096642/9004f21e6697fa080bb1ddde95f8a2a9d2bccae5/mtu_run.sh
    chmod a+rx /Applications/Fingerworks/mtu_run.sh
Now launch from the command line:
/Applications/Fingerworks/mtu_run.sh

Success!

The multitouch tool (aka fingerworks.firmup.UtilityLauncher) launched and detected my "TouchStream ST/LP ver 1.6". [RUN Diagnostics...] reported:
All sensor array tests PASSED!
Loaded 1243 Key/Gesture Mappings SUCCESFULLY
    Keymatrix#: 34
Testing Complete.

Not so fast: Can't write to device

Doh. It seems I can run the diagnostics, but I can't push a new configuration onto the device. That's a major bummer. I'll have to look into the java errors and see what can be done.
Starting transfer...
        Writing MTS_config Binary to:  /Users/andrew/Documents/MyGestures/custom4f0040stealth34.byt
        Sending configuration to Gesture Processor...
          (Sending DeleteMsg w/ minfirmver 326, minsurfver 7, keymatrixver 34
          (Sending user options)
          (Sending 16 macro definitions)
          (Sending 100 tapareas)
          (Sending 0 switches)
          (sending -1 hand)
          (sending 1 hand)
          (sending 2 hand)
          (sending 0 hand)
          (Sent 642 total events!)
...finished merging /Users/andrew/Documents/MyGestures/custom4f0040stealth34.byt
S8 Terminated with FLASH image CRC32: 0x6f1f72da
new  idDevice: 0x160, idProduct: 0x90b,  idVendor: 0xe97
USB DFU suffix appended to: /Users/andrew/Documents/MyGestures/custom4f0040stealth34.U.byt
        MTS_config Binary /Users/andrew/Documents/MyGestures/custom4f0040stealth34.U.byt ready for transfer! 
        existing  idDevice: 0x160        idProduct: 0x90b        idVendor: 0xe97
Java computed firmware image CRC32 0x6f1f72da on 32870 bytes
Exception in thread "Thread-8" java.lang.IllegalAccessError: tried to access class usb.linux.DeviceImpl from class fingerworks.firmup.USBupgrader
        at fingerworks.firmup.USBupgrader.a(Unknown Source)
        at fingerworks.firmup.USBupgrader.downloadFirmwareFile(Unknown Source)
        at fingerworks.firmup.USBupgrader.send2GestureProcessor(Unknown Source)
        at fingerworks.firmup.a.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:680)
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextGetCTM: invalid context 0x0
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextSetBaseCTM: invalid context 0x0
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextGetCTM: invalid context 0x0
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextSetBaseCTM: invalid context 0x0

Wednesday, October 17, 2012

GNU screen clipboard to X11 clipboard integration

Now that we have clipboard cut-and-paste working in remote vim, let's get GNU screen to interact with the X clipboard. This is useful when copying a large scrollback buffer into a browser app or email client. I can now copy from my screen session to my local OSX client and back!
  1. Install xsel
    sudo aptitude install xsel
  2. Add to .screenrc:
    # read and write screen clipboard to X clipboard.
    bind > eval writebuf "exec sh -c 'xsel -bi </tmp/screen-exchange'"
    bind < eval "exec sh -c 'xsel -bo >/tmp/screen-exchange'" readbuf
    
  3. ...
  4. profit

How it works

GNU screen has a built-in cut-and-paste metaphor. We leverage two new keybindings C-A > and C-A < to exchange screen data with the X11 Clipboard. The OSX X11 app then pushes the clipboard changes into the local OSX clipboard.

C-A > dumps the current screen paste buffer to /tmp/screen-exchange, and then uses xsel to push the contents of /tmp/screen-exchange to the X11 Clipboard.

C-A < uses xsel to pull the X11 Clipboard contents to /tmp/screen-exchange and then populates the screen paste buffer with the contents of /tmp/screen-exchange. At this point, the normal C-A ] will paste the data.
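
Putting it all together, a typical copy out of screen looks like this (C-a [ is screen's stock copy-mode binding; the others are the bindings from the .screenrc above):

C-a [     # enter copy mode; mark text with space ... space
C-a >     # dump the paste buffer to /tmp/screen-exchange and push it to the X11 clipboard
Cmd-V     # paste into a local OSX app, via the X11 pasteboard sync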

xsel needs a valid DISPLAY configured to interact with X. If using a remote screen session, you'll need to forward your X connection and make sure your DISPLAY var is valid inside of your screen session. For more details, see my OSX Remote VIM Clipboard post.

See the commit to my screenrc in my config repository.

Update:

Remove the -n flag from xsel -bi.

Tuesday, September 25, 2012

Lambda Architecture

aka "Runaway complexity in Big Data, and a plan to stop it."

Nathan Marz's talk tonight at Strange Loop coined the term "Lambda Architecture" to describe a hybrid batch+realtime data engine built on functions running over immutable data. It builds on themes from his "Big Data" book.

The pieces all exist, but there's no simple packaging over all of them: a distributed raw data store; map-reduce for batch (hadoop/mapr with pig, hive, etc.) to precompute views that are stored in fast-read, map-reduce-writable DBs (voldemort, elephantdb); storm for streams; a high-throughput/small-volume db for the storm output (cassandra, riak, hbase); and a custom query merge on top of both. There's no pre-made piece for the custom query merge; possibly storm works there.

Exciting and awesome!

slides and a HackerNews discussion

Monday, September 17, 2012

OSX + remote vim clipboard sync

IT WORKS! <sfx: evil genius laugh />

A few pieces are required to get smooth integration of the local OSX clipboard with the remote vim clipboard. I'll walk you through the configurations and you'll be cutting-and-pasting like it's no big thang. Pasting large blocks of text works so much better via "+p than via system paste into an xterm vim window.

Puzzle Pieces:

  • OSX clipboard syncing with the local X11
  • X11 forwarding in ssh
  • vim compiled with the +xterm_clipboard setting.
  • optional: configure vim to use xterm clipboard by default
  • optional: a better OSX terminal: iTerm2
  • optional: screen + DISPLAY var

OSX clipboard sync with X11

  1. launch X11 (Applications::Utilities::X11)
  2. open pasteboard preferences (X11::preferences::Pasteboard)
  3. check:
    1. enable syncing,
    2. update CLIPBOARD when Pasteboard changes
  4. you may want to quit X11 now to ensure the new settings are saved.
  5. Note: don't check "update Pasteboard when CLIPBOARD changes", as it produces very strange paste behavior where full lines paste as relative.

ssh X-forwarding:

You can enable this on the fly via the -X flag to ssh or by adding "ForwardX11 yes" to your .ssh/config file. ForwardX11 can be set globally or per-host.
Example .ssh/config entry for my vm:
host vm53
  user vm53
  ForwardX11 yes
The forwarding provided via ForwardX11 is seen as untrusted by the X Security extension. Untrusted clients have several limitations: they can't send synthetic events or read data from other windows, and they are time-limited.

If you really trust the remote host you can use Trusted forwarding. This is enabled with the -Y flag to ssh or the "ForwardX11Trusted true" option in .ssh/config. I've switched to using trusted connections when connecting to my local VM since my connections are open for days/weeks at a time.

host vm53
  user vm53
  ForwardX11Trusted true

Vim with +xterm_clipboard

Check the capabilities of your vim via vim --version, you're looking for +xterm_clipboard.
vm53% vim --version | grep xterm_clipboard
+xsmp_interact +xterm_clipboard -xterm_save
If your version of vim doesn't have xterm_clipboard, try another package. I'm using vim-nox for my debian/ubuntu machines.

At this point, you should be able to cut and paste using the + buffer to interact with the system clipboard. Paste with "+p and copy/yank with "+y. Under X the clipboard is in the "+" buffer; under Windows it is the "*" buffer. In OSX gvim, "+" and "*" appear to be the same buffer?

configure vim to use xterm clipboard by default

Remembering to use the + buffer is extra work. We can make this automatic by setting the clipboard option in vim: set clipboard=unnamedplus (added in Vim 7.3.074) uses the system clipboard for the default (unnamed) buffer. At this point, a plain p will paste from the system clipboard. AMAZING!
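
If your vim build supports it, turning this on is a one-line addition to your vimrc, e.g. straight from the shell:

% echo 'set clipboard=unnamedplus' >> ~/.vimrc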

iTerm2

You should ditch the default Terminal app that comes with OSX and use iTerm2 instead. You can have it do "copy on select," just as you'd expect from an Xterm, and it all ties into the work we did above. It also has some other interesting features, like native tmux support.

DISPLAY env with screen

When reconnecting to your remote screen session, you may end up with the DISPLAY variable out-of-sync. By default, I get DISPLAY=localhost:10.0 when I connect to my VM, but each new connection opens a new back channel on a new port: :11.0, :12.0, etc. You may need to update the value of DISPLAY inside your screen session via export DISPLAY=localhost:10.0, substituting the correct DISPLAY value for this ssh connection -- check DISPLAY outside of the screen session to get the value.
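
For example (the :12.0 value is illustrative -- use whatever your fresh login reports):

# outside of screen, in the new ssh login:
% echo $DISPLAY
localhost:12.0
# inside each screen shell that needs X:
% export DISPLAY=localhost:12.0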

P.S.

I had some troubles testing until I realized I was expecting select-to-copy behavior in Chrome Browser under OSX. Ha! I'm glad I finally spent the 20 minutes to get all these pieces aligned.

Update

Updated to show that X users want the + buffer rather than the * buffer, after reading up on the original patch.

Update

Updated with -Y/ForwardX11Trusted information.
Updated to warn against the "update Pasteboard" option in the OSX X11 app.

Wednesday, September 12, 2012

Modify submit-type for Gerrit project via meta/config

The Submit-type of a gerrit code review project cannot be changed in the UI after creation, but it can be modified via the hidden meta/config branch. Any setting available to create-project can be edited this way.

Project information is not stored in the gerrit database. It is stored directly in the git repository, in a branch named 'meta/config', in two files: 'project.config' and 'groups'. The values from these files are cached in the 'project_list' and 'projects' caches.

Steps to make a change:

  1. set read and push permissions on refs/meta/config (see the sketch after this list)
  2. check out the branch,
  3. change the files,
  4. push the repo back,
  5. clear the cache.
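
For step 1, the grant itself also lives in project.config; a minimal sketch, assuming your admin group is named Administrators:

[access "refs/meta/config"]
        read = group Administrators
        push = group Administrators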

Check out the branch:

% git fetch origin refs/meta/config:refs/remotes/origin/meta/config
% git checkout meta/config

Push back the changes:

#directly:
% git push origin meta/config:meta/config
#via review:
% git push origin meta/config:refs/for/refs/meta/config

Flush the caches:

% ssh gerrit gerrit flush-caches --cache project_list
% ssh gerrit gerrit flush-caches --cache projects

project.config

[access "refs/*"]
        owner = group MYgroup
[receive]
        requireChangeId = true
[submit]
        mergeContent = true
        action = merge always

groups

# UUID                                          Group Name
# eca2c52d733e5740a01747e71f018dcfdeadbeef      MYgroup
I found meta/config mentioned in some posts in the repo-discuss newsgroup.

Friday, August 31, 2012

Unexpected error report from cpantesters!?

After pushing my release of App::PM::Website, I received a cpantesters failure report. Opening it revealed an error from File::Path not exporting make_path. What-the-what? Of course File::Path exports make_path, that's like all it does!
# Failed test 'no eval errors'
# at t/app-pm-website.t line 12.
# got: '"make_path" is not exported by the File::Path module

Further inspection of the report shows that this test box is using File::Path v2.04 with perl 5.10. You remember: the File::Path version released in 2007.

PREREQUISITES:
Here is a list of prerequisites you specified and versions we
managed to load:

Module Name                  Have      Want
App::Cmd                     0.318     0
App::Cmd::Command            0.318     0
App::Cmd::Tester             0.318     0
Config::YAML                 1.42      0
Data::Dumper                 2.121_14  0
Date::Parse                  2.30      0
DateTime                     0.76      0
DateTime::Format::Strptime   1.52      0
File::Path                   2.04      0
File::Spec                   3.33      0
...

Ok, so this was me being lazy when importing my File::Path requirement, as I clearly needed 2.07 or greater (from 2008). Wow. A patched version of App::PM::Website is now released and pushed to CPAN.
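
The fix is just declaring the real minimum version; a sketch of the two usual places (the dist.ini stanza assumes a Dist::Zilla [Prereqs] section):

# in the module: fail fast on a pre-2.07 File::Path
use File::Path 2.07 qw(make_path);

; in dist.ini
[Prereqs]
File::Path = 2.07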

I have to wonder if this is a box set to be purposefully obtuse and install the lowest version it could find that matched my requirements, just to catch this sort of thing? If so, my hat is off to you fine perler!

Wednesday, August 29, 2012

Hadoop::Streaming v0.122420 released

Hadoop::Streaming perl module v0.122420 released to CPAN today.

Yanick was kind enough to send me a patch (Pull Request #2) to clean up the formatting of the POD documentation. Thank you for the patch Yanick! I love patches and pull requests! Woot!

You know what else I love? Dist::Zilla! Thanks Ricardo! Here are the complete steps to release a new version and push it to CPAN after merging in the pull request:
git pull && dzil release

Steps accomplished by dzil release:

  1. generate new version number
  2. podweaver magic to generate structured POD
  3. boilerplate: create LICENSE, MANIFEST, META.json, META.yml, Makefile, etc.
  4. create tar file of distribution
  5. extract tar file and run all tests
  6. verify no dirty files in git
  7. update Changes file to include the new version number.
  8. git commit Changes file
  9. upload tar file to cpan
  10. git tag with version number
  11. git push to origin


Update your Perl Monger group page with App::PM::Website

Fresh CPAN Upload: my first public release of App::PM::Website!

App::PM::Website installs pm-website, a command-line tool for maintaining a Perl Monger website at pm.org. pm-website handles rendering the index page and updating it via WebDAV.

Monger groups get free hosting at $group.pm.org, but only static content is allowed and it can only be edited via WebDAV. I use pm-website to maintain the Los Angeles Perl Mongers website.

WebDAV is not complicated, but it can be a pain. Once I decided to automate the WebDAV upload, it was a quick hop, skip and a jump to automating the whole process of updating the front page with information about the next meeting and updating the list of past meetings.

A single YAML configuration file holds the configuration and data for the meetings, presenters and locations. Updating for a new meeting is a simple matter of adding an entry to the meetings array with the date, location and information about the presentations and presenters. pm-website build renders the page and pm-website install pushes it via WebDAV.

Get started by creating a stub configuration file with pm-website init. Check out the la.pm.org source repository on github for more inspiration.
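
The whole update cycle looks something like this (the config file name here is illustrative -- use whatever init created for you):

% pm-website init       # create a stub configuration file
% vim website.yml       # add the next meeting to the meetings array
% pm-website build      # render the index page
% pm-website install    # push it to pm.org via WebDAV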

Technical details:

App::PM::Website uses App::Cmd and is easily extensible by writing new commands in the App::PM::Website::Command::* namespace. HTTP::DAV provides WebDAV transport and Net::Netrc handles reading credentials from a standard .netrc file. Template::Toolkit is used for rendering the index template. App::PM::Website is a Dist::Zilla based distribution.

Check out the app-pm-website source at github.

Wednesday, July 25, 2012

Oscon 2012 resources, presentations and videos

OSCON 2012 recap: "Awesome, how could it not be?"

Huge! There were upwards of 18 rooms in use for different talks on Wed and Thu. With that many choices you could pick poorly, but mostly it meant seeing something awesome at the expense of other awesome things.

Unlike this year's YAPC, not all of the talks were recorded -- most of them were not. I've spent the past few hours crawling the internet to find slide decks and videos for you, so DO NOT FRET!

One-stop shopping: visit the OSCON Speaker Slides page. It lists all the presentations where the speaker provided her slides to the organizers for publishing.

The OSCON 2012 YouTube Channel has the keynotes and a bunch of interviews. And it wouldn't be a conference without Piers singing beautifully. A four-part video on "From Datacenter to the Cloud" is up on Vimeo.

Trawling through #oscon on twitter, I found some more slidedecks and videos.

My OSCON

View My personal Schedule to see which talks I attended. Only a few seem to have slides uploaded on the oscon/o'reilly page.

Some of my favorites:

"How to Write Compilers and Optimizers"
Holy awesome, Batman! Go read the slides (pdf) now.
Shevek's goal was a "thirty minute overview of a grad school course on compilers" (aka no math), and that's pretty much what we got. Bummed I missed the beginning with his SQL parser demo. Other big take-away? "Use SableCC, a beautiful Java LR parser generator."

"Moose is Perl: A Guide to the New Revolution" (Ricardo Signes)
The slides aren't on the oscon page, but can be pulled directly from manxome.org: Moose is Perl slides, specifically the pdf.
I attended the first half (90 minutes) before switching to Presentation Aikido -- the toughest talk choice of the week, since the second 90 minutes was going to cover stuff I didn't already know.

"Build a better team through Improv"
Got us out of our comfort zone doing improvisational comedy to improve work interactions. Fun stuff. The slides (pdf) weren't used during the activity, but have useful notes on our exercises. Take-away: "Yes, and" >> "Yes, BUT".

Profiling memory usage of Perl applications (Tim Bunce)
He posted the slides on slideshare, though they're not yet linked from the oscon page.
A lovely wander through Devel::Peek, Devel::Size and friends and what-could-be for profiling memory usage (and why it is hard).

Perl Lightning Talks and Ignite
The ignite talks were good, the lightning talks were better. No one is going to do a presentation about "phpsadness" at an ignite talk. Gong!
Tim Bunce did an enlightening lightning talk, "A Space for Thought." He was kind enough to upload a copy to pastebin and send it to me via twitter.

"Wrangling Logs with Logstash and ElasticSearch"
I really wanted to attend this one, by our local L.A. MediaTemple friends Nate Jones and David Castro, but it was packed beyond standing room only and the ushers weren't letting anyone in. We may get them to present again locally, for LAPM or LAdevops?

"One-man ops; deploying at scale in EC2",
A bummer that the slides don't have presenter notes. Take-away: automate it!

The Damian Conway collection:

"Instantly better VIM" (Damian Conway)
The slides aren't listed on the oscon site, but don't worry, I hooked you up: Instantly_better_Vim.tar.gz

"Presentation Akido"
I caught the first half of Moose and the second half of Presentation Akido. They were both awesome talks. Damian didn't provide a link for slides, we had nicely bound print outs in the room. Of course, there were way more people and we ran out. So they ran off extra copies. I found a stash of them on Friday during clean-up, so have a few spares.
I've missed this talk before, so was glad to be able to schedule it in.

"The Conway Channel"

"Taming Perl Regexes"
Regexp::Debugger
Demoed live! An interactive ncurses display showing a regex being applied, step by step.

He also did "New Features of the Modern Perls (5.10 to 5.16)" and held an office hour.
Lots of Damian time. I was actually at the office hour, which also featured Ward Cunningham (Mr. XP), Shevek (compilers, parallel and distributed systems) and Christopher Clark (of SparkFun).

Hyperpolyglot:

"Inside Python"
Interesting example of digging into the python compiler/VM, with the idea of "adding a new keyword," which can't be done at the language level. That's a rough way to do meta-programming (harhar!). I left with a better understanding of what happens during python compile time.
"Data Science in R"
Great slide content available from the presenter's site
"Computing with Clojure"
Fun introduction to clojure and lispness. Tag teamed with two presenters. Highly entertaining.
"Storm: distributed and fault-tolerant realtime computing" (Nathan Marz)
I've been following storm and nathan_marz on twitter & blog for a long time. It was neat to see a live presentation. It mostly covered an overview of storm, but did touch on a new technique for simpler, shorter code.

The Synacor Challenge:

The challenge ate up a huge number of person-hours at OSCON. I started it at 11pm Thursday and (coupled with the important lesson "tea == caffeine") forgot to sleep until 7am Friday. I hit an infinite loop and bailed on it -- later I found it was merely a highly bounded loop that would recover 700,000 cycles later.

What was it? Two files: a description of a VM and a binary to be run on that VM. There are 8 codes to find along the way and report back. There was a $1000 prize for the first to solve it during the convention; it was given to one of the three people tied at 6 of 8 codes. I've gotten as far as 5. I see that @dag completed all 8 sometime yesterday.

Thursday, July 12, 2012

ssh_agent + screen

Screen is awesome, ssh agent forwarding is wondrous. If you've been following along, you know this already.

You may be frustrated that when you reconnect to a screen session your agent stops working. ssh-agent sets environment variables to tell ssh where to find the agent, and your old shells get stuck with pointers to dead agent processes.

There are a few ways around this. A common method is to make a pair of scripts, one to dump the vars to a file in your login shell and another to read them back in from your screen shells. This works, but you have to do something manually after you reconnect - IN EVERY SHELL IN YOUR SCREEN SESSION.

Some will tell you to launch the agent from within the screen session, and then manually copy the env variables to each shell. This keeps a long-running agent, which is nice but less secure. I like to keep my keys local to my laptop and forward my agent from there. Also, when you spawn new shells, they still need to be updated with the ssh environment vars.

For years I've used a workaround: a symlink pointer for my agent. Before I launch screen, I set SSH_AUTH_SOCK to point to ~/.ssh/agent. All of my screen shells point to the symlink, making it easy to update when my agent changes.

rm -f ~/.ssh/agent 
ln -s $SSH_AUTH_SOCK ~/.ssh/agent
export SSH_AUTH_SOCK=~/.ssh/agent

In practice it looks like this:
[vm53@vm53] 1003% rm -f ~/.ssh/agent; ln -s $SSH_AUTH_SOCK ~/.ssh/agent; export SSH_AUTH_SOCK=~/.ssh/agent
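
To skip the retyping, the same dance can live in your shell startup so it runs on each new ssh login, before screen starts; a minimal sketch for .zshrc (the guard conditions are my own):

# re-point the agent symlink whenever we log in with a live agent
if [[ -n "$SSH_AUTH_SOCK" && "$SSH_AUTH_SOCK" != "$HOME/.ssh/agent" ]]; then
    ln -sf "$SSH_AUTH_SOCK" ~/.ssh/agent
    export SSH_AUTH_SOCK=~/.ssh/agent
fi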

Meachum used to have an LD_PRELOAD hack that would pass certain ENV vars (SSH_AUTH_SOCK and DISPLAY) through to his running in-screen shells, but that was some crazy magic.

I've heard that tmux has support for updating certain env variables in the running shells (maybe via 'update-environment'?), specifically to handle cases like this. Anyone have details? I know that tmux is teh new h0tness, but I'm not really ready to flush 20 years of screen familiarity. But maybe.

Tuesday, July 3, 2012

rabbitmq cluster problem : version_mismatch

Goal: a rabbitmq cluster ready for HA mirrored queues.
Problem: "Error: version_mismatch" during cluster command.

I went through the steps in the clustering guide and they worked fine on my freshly installed staging servers. I'm getting an error running through them on my production instance, trying to cluster a newly created EC2 instance in with my current production rabbitmq server.

Error: version_mismatch. Specifically, one of them has 'topic_trie_node' and the other doesn't.

I created the second server (10.7.203.145) from the first by making a new volume from a snapshot of the first server's disk and mounting that volume on a new EC2 instance.

Dear LazyWeb: Any ideas?

[andrew@ip-10-7-203-145]% sudo rabbitmqctl stop_app                               :) ~
Stopping node 'ip-10-7-203-145@ip-10-7-203-145' ...
...done.
[andrew@ip-10-7-203-145]% sudo rabbitmqctl reset                                  :) ~
Resetting node 'ip-10-7-203-145@ip-10-7-203-145' ...
...done.
[andrew@ip-10-7-203-145]% sudo rabbitmqctl cluster ip-10-7-203-145@ip-10-7-203-145 ip-10-7-203-85@ip-10-7-203-85Clustering node 'ip-10-7-203-145@ip-10-7-203-145' with ['ip-10-7-203-145@ip-10-7-203-145',
                                    'ip-10-7-203-85@ip-10-7-203-85'] ...
Error: {version_mismatch,[add_ip_to_listener,exchange_event_serial,
                          exchange_scratch,gm,ha_mirrors,mirrored_supervisor,
                          remove_user_scope,semi_durable_route,topic_trie,
                          user_admin_to_tags,add_queue_ttl,
                          multiple_routing_keys],
                         [add_queue_ttl,multiple_routing_keys,
                          add_ip_to_listener,exchange_event_serial,
                          exchange_scratch,gm,ha_mirrors,mirrored_supervisor,
                          remove_user_scope,semi_durable_route,topic_trie,
                          topic_trie_node,user_admin_to_tags]}
[andrew@ip-10-7-203-145]%                                                     2 :( ~

I agree with you, prompt: sad-face indeed.

Thursday, May 31, 2012

Dzil plugins: GitHub::Meta vs GithubMeta

"Dear lazyweb: how do I add a github repository link to my dist-zilla dist.ini?"

Curtis Poe (@ovidperl) via twitter


Ovid asked how to add git repo information to a dist.ini file. Both Dist::Zilla::Plugin::GitHub::Meta and Dist::Zilla::Plugin::GithubMeta were suggested, along with the old-school Dist::Zilla::Plugin::MetaResources.


I've used MetaResources to include repository information in my dist.ini files. Which plugin should I use now: GitHub::Meta or GithubMeta?

My MetaResources usage is pretty simple: homepage + repository information. The GitHub::Meta and GithubMeta versions are nearly the same from the dist.ini side; both have the same format for manually overriding the default homepage:

#MetaResources Github example:
[MetaResources]
homepage        = http://lowlevelmanager.com/
repository.web  = http://github.com/spazm/app-pm-website
repository.url  = http://github.com/spazm/app-pm-website.git
repository.type = git

## GithubMeta version
[GithubMeta]
homepage = http://lowlevelmanager.com/

## GitHub::Meta version
[GitHub::Meta]
homepage = http://lowlevelmanager.com/

There are differences if we look inside the modules. GithubMeta makes a system call to git to get remote url(s). The plugin will only run if called from within a git repository and with git in $PATH. If it finds a github.com url, it uses that as the remote. This url is then parsed to get the github user name and project name.

Conversely, GitHub::Meta shells out to git to get the github.user entry from git config via git config github.user and then makes an API call to github to get the project name. It does have an interesting feature where it can check if the checkout is a fork and use the upstream information instead.

GitHub::Meta could be nice if you want to go all-in and auto-create the repository with GitHub::Create and push updates with GitHub::Update (actually, that one just seems to change the homepage url?); all of them require the github.user configuration set in git. The docs suggest using GitHub::Create with Git::Init. I must admit I'm behind the DZil times, as I'm not even sure where to store the default plugin options applied when running dzil new, which would actually trigger these plugins.

GithubMeta does what you'd expect and does it with a minimum of fuss. No extra options to set in your git configuration, but you do have to have the github remote added to your repo by other means. This seems a much slimmer change for existing repositories.
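
If the remote is missing, adding it is a one-liner anyway; for example, with my repo (GithubMeta just needs to see a github.com url among your remotes):

% git remote add origin git@github.com:spazm/app-pm-website.git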

Wednesday, May 23, 2012

Zsh: merge stdout and stderr with |&

|& is a zsh shortcut for merging STDOUT and STDERR when piping. It is the same as 2>&1 |, just a lot shorter to type. I find this useful for commands that insist on dumping --help output to STDERR.

command --help |& less
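
This is exactly equivalent to the portable spelling:

command --help 2>&1 | less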

Pro Tip: Don't confuse this with &| which backgrounds the final command of the pipeline.

Find out more in the "Simple commands & pipelines" section of zshmisc:

A pipeline is either a simple command, or a sequence of two or more simple commands where each command is separated from the next by `|' or `|&'. Where commands are separated by `|', the standard output of the first command is connected to the standard input of the next. `|&' is shorthand for `2>&1 |', which connects both the standard output and the standard error of the command to the standard input of the next.

Update: |& is borrowed from csh. I don't know whether it originated in csh or somewhere earlier.

The standard error output may be directed through a pipe with the standard output. Simply use the form |& instead of just |.

-- man csh(1)

Friday, May 11, 2012

Zsh brace expansion and inline for loops

Two tips for today: zsh brace expansions [1] [2] and short for loops.

Brace Expansion:
{x..y} will expand to the integers x, x+1, ..., y. Zero-pad the first number to get zero-padded output. This works in bash as well.


echo {3..5}     # 3 4 5
echo {3..100}   # 3 4 5 ... 10 11 12 ... 98 99 100
echo {03..100}  # 03 04 05 ... 10 11 12 ... 98 99 100
echo {003..100} # 003 004 005 ... 010 011 012 ... 098 099 100

Up until now, I've used seq to create integer lists like this. Using seq in commands requires a subshell. Do you know how to zero-pad the output of seq?
echo $(seq 3 5)   # 3 4 5
echo $(seq -w 3 5) # 3 4 5
echo $(seq -w 8 10) # 08 09 10

But today is about moving beyond seq. Let's move on to for loops.

For Loops:

% for i in {3..5}; do echo $i; done
3
4
5

Zsh has two shortened forms of this standard for loop. The short forms only support a single command; if you really want multiple commands you can put them in {}, but at that point you might as well use "do ... done". These won't work in bash (woo!).

I never remember this when I need it and can't find references to this zsh-specific "single line for loop" -- but it's right there in zshmisc, "Alternate Forms For Complex Commands":

for ... in list; command
for ... (list) command

# short form with "in" and ";"
% for i in {3..5}; echo "line $i"; date
line 3
line 4
line 5
Fri May 11 15:52:32 PDT 2012

# short form with parens:
% for i ({3..5}) echo "line $i"; date
line 3
line 4
line 5
Fri May 11 15:52:32 PDT 2012

# short form with multiple commands in {}
% for i in {3..5}; { echo "line $i"; date }
line 3
Fri May 11 15:52:43 PDT 2012
line 4
Fri May 11 15:52:43 PDT 2012
line 5
Fri May 11 15:52:43 PDT 2012

Complex Commands
for name ... [ in word ... ] term do list done

where term is at least one newline or ;. Expand the list of words, and set the parameter name to each of them in turn, executing list each time. If the in word is omitted, use the positional parameters instead of the words.

More than one parameter name can appear before the list of words. If N names are given, then on each execution of the loop the next N words are assigned to the corresponding parameters. If there are more names than remaining words, the remaining parameters are each set to the empty string. Execution of the loop ends when there is no remaining word to assign to the first name. It is only possible for in to appear as the first name in the list, else it will be treated as marking the end of the list.

Alternate Forms For Complex Commands

for name ... ( word ... ) sublist

A short form of for.

for name ... [ in word ... ] term sublist

where term is at least one newline or ;. Another short form of for.


Zsh history expansion

You can refer back to prior elements of your command history, modify them, and make new commands. With zsh, history expansions are expanded in-line before executing! Start by learning a couple of entries, and slowly expand as you build up your memory of useful mappings.

!$ and !* to reference args from prior commands

I've been using !$ (bang-last) for a while now. It expands to the last arg of the previous command. Use this when you are taking multiple actions with the same argument:

    % touch newfile
    % chmod a+x !$
    % vim !$
I'm now working !* into my command-line-fu, which pulls all the args from the previous command rather than just the last.

History Expansion:

! initiates history expansion, pulling items from your previous commands. The most commonly known/used abbreviation is !!, which repeats the last command in full. Similarly, !n repeats item n from your history and !-n goes back n commands; !-1 is synonymous with !!. !str is the most recent command to start with str. !# is the current command line typed so far.

"Word Designators" can be applied to the selected history item, as we do above with $ and * . 0 is the command, n is the nth argument, x-y is args x through y. x* is like x-$ while x- is like x-$ excluding the last item.

I haven't even gotten started with the modifiers, like G for global on ^search^replace, a and A for prepending the current dir, and other crazy ones.

Examples:

!!                # last command
!$                # final word of prior command
!*                # all words of prior command (excluding the command itself)
!3                # third command from history
!-3:*             # all words from three commands back 
^foo^bar          # last command, with the first substring foo replaced by bar
!-2:s^foo^bar     # full command from 2 back, with the first foo replaced by bar
!-2:s^foo^bar^:G  # full command from 2 back, with all "foo"s replaced by "bar"s
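
One more that's handy while learning: the :p modifier (see the Modifiers section below) prints the expanded command without executing it:

!-2:p             # show the expansion of the command two back, don't run it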

man zshexpn for all the details.
History Expansion

History expansion allows you to use words from previous command lines in the command line you are typing. This simplifies spelling corrections and the repetition of complicated commands or arguments. Immediately before execution, each command is saved in the history list, the size of which is controlled by the HISTSIZE parameter. The one most recent command is always retained in any case. Each saved command in the history list is called a history event and is assigned a number, beginning with 1 (one) when the shell starts up. The history number that you may see in your prompt (see EXPANSION OF PROMPT SEQUENCES in zshmisc(1)) is the number that is to be assigned to the next command.

Overview

A history expansion begins with the first character of the histchars parameter, which is '!' by default, and may occur anywhere on the command line; history expansions do not nest. The '!' can be escaped with '\' or can be enclosed between a pair of single quotes ('') to suppress its special meaning. Double quotes will not work for this. Following this history character is an optional event designator (see the section 'Event Designators') and then an optional word designator (the section 'Word Designators'); if neither of these designators is present, no history expansion occurs.
Input lines containing history expansions are echoed after being expanded, but before any other expansions take place and before the command is executed. It is this expanded form that is recorded as the history event for later references.

By default, a history reference with no event designator refers to the same event as any preceding history reference on that command line; if it is the only history reference in a command, it refers to the previous command. However, if the option CSH_JUNKIE_HISTORY is set, then every history reference with no event specification always refers to the previous command.

For example, '!' is the event designator for the previous command, so '!!:1' always refers to the first word of the previous command, and '!!$' always refers to the last word of the previous command. With CSH_JUNKIE_HISTORY set, then '!:1' and '!$' function in the same manner as '!!:1' and '!!$', respectively. Conversely, if CSH_JUNKIE_HISTORY is unset, then '!:1' and '!$' refer to the first and last words, respectively, of the same event referenced by the nearest other history reference preceding them on the current command line, or to the previous command if there is no preceding reference.

The character sequence '^foo^bar' (where '^' is actually the second character of the histchars parameter) repeats the last command, replacing the string foo with bar. More precisely, the sequence '^foo^bar^' is synonymous with '!!:s^foo^bar^', hence other modifiers (see the section 'Modifiers') may follow the final '^'. In particular, '^foo^bar:G' performs a global substitution.

If the shell encounters the character sequence '!"' in the input, the history mechanism is temporarily disabled until the current list (see zshmisc(1)) is fully parsed. The '!"' is removed from the input, and any subsequent '!' characters have no special significance.

A less convenient but more comprehensible form of command history support is provided by the fc builtin.

Event Designators

An event designator is a reference to a command-line entry in the history list. In the list below, remember that the initial '!' in each item may be changed to another character by setting the histchars parameter.
!
Start a history expansion, except when followed by a blank, newline, '=' or '('. If followed immediately by a word designator (see the section 'Word Designators'), this forms a history reference with no event designator (see the section 'Overview').

!!

Refer to the previous command. By itself, this expansion repeats the previous command.

!n

Refer to command-line n.

!-n

Refer to the current command-line minus n.

!str

Refer to the most recent command starting with str.

!?str[?]
Refer to the most recent command containing str. The trailing '?' is necessary if this reference is to be followed by a modifier or followed by any text that is not to be considered part of str.
!#
Refer to the current command line typed in so far. The line is treated as if it were complete up to and including the word before the one with the '!#' reference.

!{...}

Insulate a history reference from adjacent characters (if necessary).

Word Designators

A word designator indicates which word or words of a given command line are to be included in a history reference. A ':' usually separates the event specification from the word designator. It may be omitted only if the word designator begins with a '^', '$', '*', '-' or '%'. Word designators include:
0
The first input word (command).

n

The nth argument.

^

The first argument. That is, 1.

$

The last argument.

%

The word matched by (the most recent) ?str search.

x-y

A range of words; x defaults to 0.

*

All the arguments, or a null value if there are none.

x*

Abbreviates 'x-$'.

x-

Like 'x*' but omitting word $.

Note that a '%' word designator works only when used in one of '!%', '!:%' or '!?str?:%', and only when used after a !? expansion (possibly in an earlier command). Anything else results in an error, although the error may not be the most obvious one.
Modifiers

After the optional word designator, you can add a sequence of one or more of the following modifiers, each preceded by a ':'. These modifiers also work on the result of filename generation and parameter expansion, except where noted.
a
Turn a file name into an absolute path: prepends the current directory, if necessary, and resolves any use of '..' and '.' in the path. Note that the transformation takes place even if the file or any intervening directories do not exist.

A

As 'a', but also resolve use of symbolic links where possible. Note that resolution of '..' occurs before resolution of symbolic links. This call is equivalent to a unless your system has the realpath system call (modern systems do).

c

Resolve a command name into an absolute path by searching the command path given by the PATH variable. This does not work for commands containing directory parts. Note also that this does not usually work as a glob qualifier unless a file of the same name is found in the current directory.

e

Remove all but the extension.

h

Remove a trailing pathname component, leaving the head. This works like 'dirname'.

l

Convert the words to all lowercase.

p

Print the new command but do not execute it. Only works with history expansion.

q

Quote the substituted words, escaping further substitutions. Works with history expansion and parameter expansion, though for parameters it is only useful if the resulting text is to be re-evaluated such as by eval.

Q

Remove one level of quotes from the substituted words.

r

Remove a filename extension of the form '.xxx', leaving the root name.

s/l/r[/]
Substitute r for l as described below. The substitution is done only for the first string that matches l. For arrays and for filename generation, this applies to each word of the expanded text. See below for further notes on substitutions.
The forms 'gs/l/r' and 's/l/r/:G' perform global substitution, i.e. substitute every occurrence of r for l. Note that the g or :G must appear in exactly the position shown.
&
Repeat the previous s substitution. Like s, may be preceded immediately by a g. In parameter expansion the & must appear inside braces, and in filename generation it must be quoted with a backslash.

t

Remove all leading pathname components, leaving the tail. This works like 'basename'.

u

Convert the words to all uppercase.

x

Like q, but break into words at whitespace. Does not work with parameter expansion.

Monday, May 7, 2012

debugging nagios remote nrpe commands

Nagios NRPE debugging steps:
  1. run the command manually on the target host
  2. enable debugging in nrpe.cfg and watch syslog
  3. dig in deeper with debug jobs.
Debugging nagios remote nrpe commands can feel very opaque. Normally I find my issue in step 1 of the debug escalation; today I had to hit all three steps while debugging a check that wraps s3cmd. I eventually found that HOME is not set in the environment used to run nrpe commands.
check_nrpe -H 10.7.202.92 -c check_ui_s3_backup                                 
NRPE: Unable to read output
I run my check remotely and receive the dreaded general error "unable to read output." This means the script failed to run and didn't produce any output to STDOUT. STDERR seems to be ignored, even with logging enabled.

Step 1a: go to the server and verify the command being run by nrpe.

[andrew@ip-10-7-202-92]% grep check_ui_s3_backup /etc/nagios/nrpe.d/herbie.cfg
command[check_ui_s3_backup]=HOME=~postgres /usr/lib/nagios/plugins/herbie/check_ui_s3_backup
Step 1b: run the command manually. Here I find that the script fails if I don't have the config file:
[andrew@ip-10-7-202-92]% /usr/lib/nagios/plugins/herbie/check_ui_s3_backup
ERROR: /home/andrew/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
That should be a simple fix. Find the user running the nrpe command and give them a .s3cfg. Easy-Peasy.
cp .s3cfg ~nagios/
sudo -u nagios -H /usr/lib/nagios/plugins/herbie/check_ui_s3_backup
OK - Last backup 0 days ago.
Ok, it works locally. Recheck it remotely. It fails?!!?! This is where we start the gnashing of teeth and pulling of hair.
[andrew@ip-10-7-203-10]% check_nrpe -H 10.7.202.92 -c check_ui_s3_backup
NRPE: Unable to read output
Step 2: Enable logging in nrpe.cfg, run the remote check and inspect the logs. Surprise, nothing useful.
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Connection from 10.7.203.10 port 15286
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Host address is in allowed_hosts
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Handling the connection...
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Host is asking for command 'check_ui_s3_backup' to be run...
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Running command: /usr/lib/nagios/plugins/herbie/check_ui_s3_backup
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Command completed with return code 3 and output: 
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Return Code: 3, Output: NRPE: Unable to read output
May  7 19:08:29 ip-10-7-202-92 nrpe[17159]: Connection from 10.7.203.10 closed.
Step 3: debug jobs (aka printf aka "hail mary" debugging). I create two new nrpe entries and restart nagios-nrpe-server. The first shows the user running the command and the second shows the environment, using whoami and env respectively.
command[check_ui_test]=whoami
command[check_ui_test2]=env
[andrew@ip-10-7-203-10]% check_nrpe -H 10.7.202.92 -c check_ui_test
nagios
[andrew@ip-10-7-203-10]% check_nrpe -H 10.7.202.92 -c check_ui_test2
NRPE_PROGRAMVERSION=2.12
TERM=screen-256color-bce
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
LANG=en_US.UTF-8
NRPE_MULTILINESUPPORT=1
PWD=/
Yep, the tests are running as the expected user, nagios.

Holy Schmoly! Look at that environment! HOME is not set. Simple enough to fix for my check, but wow was I not expecting that. Also useful to note the minimal PATH and that the working directory is /.

Update the check to explicitly set HOME in the environment:

command[check_ui_s3_backup]=HOME=~nagios /usr/lib/nagios/plugins/herbie/check_ui_s3_backup
Restart nrpe:
[andrew@ip-10-7-202-92]% sudo service nagios-nrpe-server restart
 * Stopping nagios-nrpe nagios-nrpe
   ...done.
 * Starting nagios-nrpe nagios-nrpe
   ...done.
Check:
[andrew@ip-10-7-203-10]% check_nrpe -H 10.7.202.92 -c check_ui_s3_backup
OK - Last backup 0 days ago.

Tuesday, April 24, 2012

zsh history: extend and persist


Zsh can do so much awesome stuff with your history. Yet by default it doesn't save to file and only stores 30 items.
A history mechanism for retrieving previously typed lines (most simply with the Up or Down arrow keys) is available; note that, unlike other shells, zsh will not save these lines when the shell exits unless you set appropriate variables, and the number of history lines retained by default is quite small (30 lines). See the description of the shell variables (referred to in the documentation as parameters) HISTFILE, HISTSIZE and SAVEHIST in zshparam(1).
Zsh can do so much if you ask it: save lots of lines, store them on exit, merge multiple concurrent histories on save, store and merge incrementally! A dizzying array of options.

Bottom Line Up Front: Having done the following research, I'm adding some new options to my .zshrc:
# .zshrc
## History
HISTFILE=$HOME/.zhistory       # enable history saving on shell exit
setopt APPEND_HISTORY          # append rather than overwrite history file.
HISTSIZE=1200                  # lines of history to maintain in memory
SAVEHIST=1000                  # lines of history to maintain in history file.
setopt HIST_EXPIRE_DUPS_FIRST  # allow dups, but expire old ones when I hit HISTSIZE
setopt EXTENDED_HISTORY        # save timestamp and runtime information
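
A quick sanity check in a new shell (the echoed values assume the settings above and a home dir of /home/andrew):

% echo $HISTFILE $HISTSIZE $SAVEHIST
/home/andrew/.zhistory 1200 1000
% fc -l -3     # list the last three history events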

Let's start digging! Checking the zshparam manpage we see:
HISTFILE
The file to save the history in when an interactive shell exits. If unset, the history is not saved.
HISTSIZE <S>
The maximum number of events stored in the internal history list. If you use the HIST_EXPIRE_DUPS_FIRST option, setting this value larger than the SAVEHIST size will give you the difference as a cushion for saving duplicated history events.
SAVEHIST
The maximum number of history events to save in the history file.

As we continue digging, we find some of the other exciting options in the HISTORY section of the zshoptions manpage: APPEND_HISTORY, EXTENDED_HISTORY, SHARE_HISTORY, INC_APPEND_HISTORY, HIST_SAVE_NO_DUPS, HIST_SAVE_BY_COPY, HIST_REDUCE_BLANKS.

Once you start merging histories (APPEND_HISTORY, INC_APPEND_HISTORY or SHARE_HISTORY), you'll want to limit the duplications ( HIST_EXPIRE_DUPS_FIRST, HIST_IGNORE_DUPS, HIST_IGNORE_ALL_DUPS, HIST_SAVE_NO_DUPS).

I found SHARE_HISTORY to be too confusing, merging all my current shells into one shared history, live as I go. I want !! or up-arrow to repeat the last command in THIS VERY SHELL.

APPEND_HISTORY
If this is set, zsh sessions will append their history list to the
history file, rather than replace it. Thus, multiple parallel zsh
sessions will all have the new entries from their history lists added to
the history file, in the order that they exit. The file will still be
periodically re-written to trim it when the number of lines grows 20%
beyond the value specified by $SAVEHIST (see also the
HIST_SAVE_BY_COPY option).
INC_APPEND_HISTORY
This option works like APPEND_HISTORY except that new history lines
are added to the $HISTFILE incrementally (as soon as they are
entered), rather than waiting until the shell exits. The file will
still be periodically re-written to trim it when the number of lines
grows 20% beyond the value specified by $SAVEHIST (see also the
HIST_SAVE_BY_COPY option).
EXTENDED_HISTORY
Save each command's beginning timestamp (in seconds since the epoch)
and the duration (in seconds) to the history file. The format of this
prefixed data is:

`: <beginning time>:<elapsed seconds>;<command>'.
SHARE_HISTORY
This option both imports new commands from the history file, and also
causes your typed commands to be appended to the history file (the
latter is like specifying INC_APPEND_HISTORY). The history lines are
also output with timestamps ala EXTENDED_HISTORY (which makes it
easier to find the spot where we left off reading the file after it gets
re-written).
HIST_EXPIRE_DUPS_FIRST
If the internal history needs to be trimmed to add the current command
line, setting this option will cause the oldest history event that has
a duplicate to be lost before losing a unique event from the list.
You should be sure to set the value of HISTSIZE to a larger number
than SAVEHIST in order to give you some room for the duplicated
events, otherwise this option will behave just like
HIST_IGNORE_ALL_DUPS once the history fills up with unique events.
HIST_IGNORE_ALL_DUPS
If a new command line being added to the history list duplicates an
older one, the older command is removed from the list (even if it is
not the previous event).
HIST_IGNORE_DUPS (-h)
Do not enter command lines into the history list if they are
duplicates of the previous event.
HIST_SAVE_NO_DUPS
When writing out the history file, older commands that duplicate newer
ones are omitted.

Data-Driven Documents with d3.js and cubism.js

Cubism.js: "A D3 plugin for visualizing time-series data".

Cubism can pull stats from graphite to produce pretty dashboard monitors. It is built on top of d3.js : Data-Driven Documents.

We're slowly putting stats into graphite at work. It'd be great fun to find the time to play with something like this; even better for someone to build me something shiny. Any takers? :)

Monday, April 16, 2012

remove a file from the most recent git commit with 'git reset'

How to quickly and easily remove a file from your most recent git commit while maintaining the changes in the working directory: use 'git reset HEAD^ -- $file' followed by 'git commit --amend'.

Caveats: Don't modify your commits if they've been pushed upstream or shared (unless you know why and how and ... no, just don't).

git reset HEAD^ -- $file
git commit --amend

Notes:

  1. This technique works nicely if you need to modify a previous commit, by wrapping it inside a 'git rebase -i ...' and then 'edit'ing the commit you want (see the sketch after this list).
  2. My right-hand prompt shows the VCS system (git), branch and git status. zsh's vcs_info is AWESOME.
  3. Yes, I do have a smiley face in my prompt. It goes from a green happy face to a red sad face if the last command failed.
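
A minimal sketch of that rebase wrapper (HEAD~3 and the file name b are placeholders):

% git rebase -i HEAD~3      # mark the offending commit as 'edit'
% git reset HEAD^ -- b      # pull b back out of that commit
% git commit --amend        # rewrite the commit without b
% git rebase --continue     # replay the rest of the branch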

Short Example:

% git add a b c
% git commit -m "adds a and c"
# realize I didn't mean to include b!
% git reset HEAD^ -- b
% git commit --amend

Full Example:

[vm53@vm53]% git status                                 :) (git)-[master] ~/src/bar
# On branch master
nothing to commit (working directory clean)
[vm53@vm53]% echo "a" > a; echo "b">b; echo "c">c       :) (git)-[master] ~/src/bar
[vm53@vm53]% git status                                 :) (git)-[master] ~/src/bar
# On branch master
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#       a
#       b
#       c
nothing added to commit but untracked files present (use "git add" to track)
[vm53@vm53]% git add a b c                              :) (git)-[master] ~/src/bar
[vm53@vm53]% git commit                                 :) (git)-[master] ~/src/bar
### editor opens with the old git message, modify if desired
[master 18d73bd] adds a and c
 3 files changed, 3 insertions(+), 0 deletions(-)
 create mode 100644 a
 create mode 100644 b
 create mode 100644 c
[vm53@vm53]% git status                                 :) (git)-[master] ~/src/bar
# On branch master  
nothing to commit (working directory clean)
[vm53@vm53]% git reset HEAD^ -- b                       :) (git)-[master] ~/src/bar
[vm53@vm53]% git status                                 :) (git)-[master] ~/src/bar
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#       deleted:    b
#
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#       b
[vm53@vm53]% git commit --amend -m "adds a and c"       :) (git)-[master] ~/src/bar
[master c15220b] adds a and c
 2 files changed, 2 insertions(+), 0 deletions(-)
 create mode 100644 a
 create mode 100644 c
[vm53@vm53]% git status                                 :) (git)-[master] ~/src/bar
# On branch master
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#       b
nothing added to commit but untracked files present (use "git add" to track)
[vm53@vm53]%                                            :) (git)-[master] ~/src/bar

Thursday, April 5, 2012

Email for the High D (DiSC model)

What does a High D want in her email?
Direct. No salutation, no contact info. Bottom line up front. Short: four paragraphs of four sentences is an upper bound; one to two sentences preferred. Actionable. No story, no background. NO ATTACHMENTS unless you are specifically sending something they asked for. BEAM INFORMATION STRAIGHT INTO HER HEAD.

A high S/high C would call the same email too short, underspecified and cold/angry.

You've got a huge corpus to train from: read your boss's emails and reply using her style. It improves both communication (i.e., the actual ability to get your thought into their head) and appreciation.

Email and the High D comes from the Career Tools podcast, by the Manager Tools guys.

See also Email and the High I. I'm looking forward to "Email and the High C" and "Email and the High S"; I hope they get recorded and shared.

Tuesday, April 3, 2012

More Storm Videos

Six videos covering an hour-and-a-half of Nathan Marz and Ted Dunning talking about Storm at SF DataMining.

Wednesday, March 28, 2012

Smile! with a Zsh prompt happy/sad face

UPDATE: Now in color!
%(?,%F{green}:%),%F{yellow}%? %F{red}:()%f

% RPROMPT='%(?,%F{green}:%),%F{yellow}%? %F{red}:()%f'
% /bin/true                                                                     :)
% /bin/false                                                                    :)
%                                                                             1 :(
Wouldn't you like a more descriptive shell prompt that shows the return value of the last command in a visually intuitive way? Sure, you could use %? to get the raw return code: zero for success, non-zero for failure. But that just makes your prompt more bizarre to your (pair-programming) partner.

Add this expansion sequence %(?.:%).:() to your PROMPT or RPROMPT to add a smiley/frowny face to your zsh prompt based on the return status of the previous command.

Let's see it in action! "Before" vs. "after" putting this in my RPROMPT (the right-side prompt), replacing the raw %? return value I had previously.

#BEFORE
% RPROMPT='%? %~'                                                0 ~
% /bin/true                                                      0 ~
% /bin/false                                                     0 ~
%                                                                1 ~

#AFTER
% RPROMPT='%(?,:%),:() %~'                                       1 ~
% /bin/true                                                     :) ~
% /bin/false                                                    :) ~
%                                                               :( ~
Now you're wondering how you'll live without knowing the actual integer return code, right? You don't have to give that up. Let's modify the false-text to prepend the failure code %? to the sad face:
#Add this to your PROMPT or RPROMPT: %(?,:%),%? :()

[vm53@vm53]% RPROMPT='%(?,:%),%? :() %~'                                   :) ~
[vm53@vm53]% /bin/true                                                     :) ~
[vm53@vm53]% /bin/false                                                    :) ~
[vm53@vm53]%                                                             1 :( ~
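
To make your favorite variant permanent, set RPROMPT in ~/.zshrc. A minimal sketch, reusing the colored sequence from the update above; tacking the %~ current-directory escape on the end is my own addition:

# green :) on success; yellow return code and red :( on failure
RPROMPT='%(?,%F{green}:%),%F{yellow}%? %F{red}:()%f %~'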

How does this work? We use the built-in conditional form, %(x.true-text.false-text), as documented in the zsh man pages under "CONDITIONAL SUBSTRINGS IN PROMPTS". A closing parenthesis must be encoded as %) to appear in the true-text or false-text. The test character ? checks the return code of the previous command. Our true-text makes a happy face :) and our false-text makes a sad face :(. Our updated false-text prepends the return value: %? :(. I chose to prepend because this is the first item in my right prompt; that keeps the smiley/frowny faces aligned.
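
The same ternary form works with any test character from the man page excerpt below. For instance, a small sketch of my own (not from the original post) that shows the number of background jobs, but only when there is at least one:

# '1j' is true when at least 1 job exists; %j expands to the job count
RPROMPT='%(1j.[%j jobs] .)%~'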

Inspired by Selena Deckelmann's Use This Interview. Thanks Clark for showing me the post and then putting up the challenge to do it in zsh (and do it more cleanly).

CONDITIONAL SUBSTRINGS IN PROMPTS:

%(x.true-text.false-text)
              Specifies a ternary expression.  The character following  the  x
              is  arbitrary;  the  same character is used to separate the text
              for the `true' result from that for the  `false'  result.   This
              separator  may  not appear in the true-text, except as part of a
              %-escape sequence.  A `)' may appear in the false-text as  `%)'.
              true-text  and  false-text  may  both contain arbitrarily-nested
              escape sequences, including further ternary expressions.

              The left parenthesis may be preceded or followed by  a  positive
              integer  n,  which defaults to zero.  A negative integer will be
              multiplied by -1.  The test character x may be any of  the  fol‐
              lowing:

              !      True if the shell is running with privileges.
              #      True if the effective uid of the current process is n.
              ?      True if the exit status of the last command was n.
              _      True if at least n shell constructs were started.
              C
              /      True if the current absolute path has at least n elements
                     relative to the root directory, hence / is counted  as  0
                     elements.
              c
              .
              ~      True if the current path, with prefix replacement, has at
                     least n elements relative to the root directory, hence  /
                     is counted as 0 elements.
              D      True if the month is equal to n (January = 0).
              d      True if the day of the month is equal to n.
              g      True if the effective gid of the current process is n.
              j      True if the number of jobs is at least n.
              L      True if the SHLVL parameter is at least n.
              l      True  if  at least n characters have already been printed
                     on the current line.
              S      True if the SECONDS parameter is at least n.
              T      True if the time in hours is equal to n.
              t      True if the time in minutes is equal to n.
              v      True if the array psvar has at least n elements.
              V      True  if  element  n  of  the  array  psvar  is  set  and
                     non-empty.
              w      True if the day of the week is equal to n (Sunday = 0).

Tuesday, January 17, 2012

The Grail of Efficiency : premature optimization

Premature optimization is the root of all evil ... most of the time. You'll only know it's time to optimize after you've built. But this doesn't get us off the hook for building slow systems, nor for adding nonessential complexity to our solutions.

Build it first, then measure, then improve. And P.S.: you're bad at guessing what's wrong.

I like this longer version of Knuth's quotation (though he claims he didn't coin the phrase) better than the one that shows up on the Wikipedia page on program optimization.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.
-- Donald Knuth, ACM: Structured Programming with go to Statements

Wednesday, January 11, 2012

"Big Data" book by Nathan Marz

Big Data: Principles and best practices of scalable realtime data systems by Nathan Marz and Samuel E. Ritchie is now available in MEAP/Roughcut edition from Manning.

The early access edition of my book "Big Data" is now available. Use code bd50 for 50% off http://www.manning.com/marz/
---- @nathanmarz on twitter (link)
Questions about the book? Ask me here: news.ycombinator.com/item?id=3444300
---- @nathanmarz on twitter (link)

Nathan tweeted a discount code 'bd50' for 50% off, which is a nice bonus. I ordered my copy (the ebook+print bundle) on Monday. Looking forward to digging in and updating you all with a review. Excited to read the appendix on Storm.

Table of Contents:

  1. A new paradigm for Big Data - FREE
  2. Data model for Big Data - AVAILABLE
  3. Data storage on the batch layer
  4. MapReduce and batch processing
  5. Batch processing with Cascading
  6. Basics of the serving layer
  7. Storm and the speed layer
  8. Incremental batch processing
  9. Layered architecture in-depth
  10. Piping the system together
  11. Future of NoSQL and Big Data processing
    • Appendix A: Hadoop
    • Appendix B: Thrift
    • Appendix C: Storm

Tuesday, January 10, 2012

Building a jabber bot with Bot::Backbone

There are many ways to write a (jabber/xmpp) chat bot. A quick search for "Jabber Bot" turns up Net::Jabber::Bot, Bot::JabberBot, Bot::Backbone::Service::JabberChat, Bot::Jabbot, IM::Engine, AnyEvent::XMPP and more. You'll see even more if you search for XMPP instead.

How to choose?

I started with Net::Jabber::Bot, but ran into problems connecting to Google Talk-based chat. Jabber modules built on AnyEvent::XMPP (seem to) handle Google's authentication better than those built on Net::XMPP. This is moot now that I have a normal jabber server, but it still tripped me up. The POD documentation lays out oddly, so I patched it and filed a change-request at github.

Knowing that I would eventually want an event-loop-based app, I focused on the AnyEvent::XMPP packages. Bot::Backbone and IM::Engine are both interesting abstractions. IM::Engine is, quote, "currently alpha quality with serious features missing and is rife with horrible bugs." Perhaps I should be happy that he admits this up front? On to Bot::Backbone and sugary Moose!

Once I realized that the group_domain wouldn't be automatically filled in as "conference.$domain" in Bot::Backbone::Service::JabberChat, I was able to connect to a group chat room on my jabber server!

Follow along with my bot, App::Sulla, on github. Thus far I have a working connection that responds to "!time" requests -- I've even fixed the part where it threw warnings on all other messages!

# this code in my dispatch table:
    also not_command not_to_me run_this {
        my ( $self, $message ) = @_;
        respond { "hello world" };
    };

# led to the following error on every other chat message:
unhandled callback exception on event (message, AnyEvent::XMPP::Ext::MUC=HASH(0x3746818), AnyEvent::XMPP::Ext::MUC::Room=HASH(0x3a6d700) ANY_STRING ): Can't call method "add_predicate_or_return" on an undefined value at perl5/lib/perl5/Bot/Backbone/DispatchSugar.pm line 124.

I decided I wanted to pass the auth and channel information into App::Sulla as parameters to the constructor. This didn't play very well with the service sugar: service executes at compile time and squirrels away the configuration hash for the service into services. I added a before modifier on construct_services, which is called during run just before each service is initialized.

Monday, January 2, 2012

Storm is upon us!

A storm (from Proto-Germanic *sturmaz "noise, tumult") is any disturbed state of an astronomical body's atmosphere, especially affecting its surface, and strongly implying severe weather.
-- wikipedia:storm

Storm is open-source: distributed and fault-tolerant realtime computation
-- Nathan Marz via twitter
Twitter has open sourced Storm, a distributed real-time processing engine (Announcement, GitHub). Hadoop is to batch as Storm is to stream. This is huge. So huge that it seems to be mostly ignored?!

Storm's primary author, Nathan Marz, is the same guy who created Cascalog (a marriage of Clojure and Datalog for running queries on Hadoop) and ElephantDB. You'll remember he was working at BackType (their first non-founder hire. Good choice, guys!). Twitter bought BackType, and now they are sharing Storm with us. Thanks, Twitter!

Since Storm's release seems to have flown in under the radar, and "storm" is a difficult generic search term, I've collected links to the relevant resources.

Resources & Announcements:

StrangeLoop/Infoq presentation
http://www.infoq.com/presentations/Storm
Nathan open-sourced Storm in the middle of his StrangeLoop 2011 presentation.
The slides below the image update as the video plays.
Storm Slides
http://www.slideshare.net/nathanmarz/storm-distributed-and-faulttolerant-realtime-computation
StrangeLoop 2011 video links
https://thestrangeloop.com/news/strange-loop-2011-video-schedule
A Storm is Coming -- more details from Twitter.
http://engineering.twitter.com/2011/08/storm-is-coming-more-details-and-plans.html
Storm Source on Github
https://github.com/nathanmarz/storm
Storm Wiki on Github
Wiki Front Page
Rationale
Tutorial
Mailing List
http://groups.google.com/group/storm-user
0.6.1 released
http://groups.google.com/group/storm-user/browse_thread/thread/72b3d4a4aebdebea
Testing Storm Topologies (in Clojure)
http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html
Overview of Storm Presentation at BashoChats 001
http://basho.com/blog/technical/2011/12/20/Basho-Chats-001-Talk-Videos/
BackType techtalks
http://tech.backtype.com/pages/presentations-8