Wednesday, July 8, 2015

Effective Git messages and history inspection

Embedded below is my presentation from YAPC::NA 2015 on Effective Git: better commits via inspecting history and code archeology.

I showed the elements of an effective commit message, why they're useful during inspection of the code, and how to coerce your rough-draft feature branch into a production-ready artifact.

The slides in the video are washed out, so follow along with the Slides (pdf).

From the talk description:

Harness the power of Version Control to view a project’s evolution over time. We have the luxury of moving forward and backwards through the history of our projects, viewing changes through time and reading sign posts along the journey. Experience reading commit messages will prove how useful they are at sharing the mental model behind the code. Reading historical commit messages and viewing diffs improves our ability to document and stage our own commits. Commits are not write-only! They are messages from the past that tell us about our present.

I’ll show you the tools I use for diving into a new code base and how I interact with my current projects on a daily basis. I’ll show how I answer the questions that come up when reading and debugging code. I’ll show you how I stage and rebase my commits to make a readable history. You’re keystrokes away from pivoting from code to annotation to arbitrary diffs, and then cross-correlating commit messages with your ticketing system.

Wednesday, April 22, 2015

Renew expiring GeoTrust HTTPS/SSL certificate in Amazon AWS for S3 and CloudFront

Key Insight

AWS doesn't let you modify the key for server-certificates, forcing you to create new ones and then update CloudFront (CF) and Elastic Load Balancer (ELB) configurations to use the new cert.

My corporate HTTPS/SSL certificate is expiring. I need to renew it and get it pushed to AWS IAM for use in S3 and CloudFront. If you're in the same boat, I hope these instructions help you out.

PS. Hi Future me, I'll see you in about a year when this round of certs expires.

Materials Needed:

  1. CSR and private key file.
    1. The current set is preferred.
    2. If you don't have the original files, you can create a new pair.
    3. If you are changing the CSR, your certificate authority may need to spend time re-validating you.
  2. Account and password for your certificate authority.
  3. AWS credentials with access to modify IAM certificates.
  4. AWS command-line tools installed.

Basic Steps:

  1. Renew the certificate:
    1. Connect to your certificate authority. For me this is GeoTrust.
    2. Click the big [renew] button by your current certificate.
      1. Pick the new certificate term.
      2. Confirm the admin and billing contacts.
      3. Update the CSR for confirmation.
      4. Pay.
      5. Wait for confirmation.
  2. Download and prep the certificate files:
    1. Download the certificate bundle. Choose type "other", which provides a zipped bundle of files. Unzip it and enter the directory. The bundle contains:
      crossRootCA.cer
      getting_started.txt
      IntermediateCA.cer
      ssl_certificate.cer
    2. Create a certificate bundle from the intermediate and cross-root files:
      cat IntermediateCA.cer crossRootCA.cer > geotrust-chain.pem
    3. Copy the original private key to the local dir. For me this is company.rsa.key. This must be an RSA key in x509 format. (A quick check that this key matches the new certificate appears just after this list.)
      cp secret_files/company.rsa.key ./
  3. Create a new AWS IAM server-certificate.
    1. AWS doesn't support modifying the keyfile in existing server-certificates, so we need to create new ones.
    2. CloudFront requires a separate server-certificate with a path starting with '/cloudfront/', so we'll upload the key twice to create two server-certificates.
    3. Upload the certificate twice:
      aws iam upload-server-certificate \
        --server-certificate-name company-test \
        --certificate-body file://ssl_certificate.cer \
        --private-key file://company.rsa.key \
        --certificate-chain file://geotrust-chain.pem \
        --path /

      aws iam upload-server-certificate \
        --server-certificate-name company-test-cf \
        --certificate-body file://ssl_certificate.cer \
        --private-key file://company.rsa.key \
        --certificate-chain file://geotrust-chain.pem \
        --path /cloudfront/
  4. Update AWS to use the new server-certificates
    1. Cloudfront:
      1. For each CloudFront distribution using the expiring server-certificate: 
        1. In the console: Console -> CloudFront -> Distribution Name -> [General] -> [Edit] 
        2. Then choose the new certificate from the drop-down.
    2. ELB:
      1. Console -> EC2 -> (pick region) -> Load Balancers
      2. For each load balancer that uses HTTPS with the old cert:
        1. right-click -> 'edit listeners'
        2. Use the "change" link in the SSL Certificate column.
          1. Certificate Type: Choose an existing certificate
          2. Certificate Name: choose the new certificate from the drop-down
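
Before uploading anything, here is the key/certificate check promised in step 2. It confirms that the private key actually matches the freshly issued certificate by comparing modulus digests; this is plain openssl, nothing AWS-specific, and the file names assume the bundle layout above.

# the two digests must be identical if company.rsa.key matches the new certificate
openssl x509 -noout -modulus -in ssl_certificate.cer | openssl md5
openssl rsa -noout -modulus -in company.rsa.key | openssl md5
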
Today I learned about and used the aws iam *-server-certificate* commands. Next steps would be bypassing the console and automating detection and updates of ELB and CF entries.
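
Here is a rough sketch of where I would start on that automation. The commands exist in the aws cli, but treat the --query expressions as assumptions about the output shape; verify them against your own account before wiring anything up.

# list IAM server-certificates with their expiration dates
aws iam list-server-certificates \
  --query 'ServerCertificateMetadataList[].[ServerCertificateName,Expiration,Arn]' \
  --output table

# show which classic ELB listeners reference which certificate ARN
aws elb describe-load-balancers \
  --query 'LoadBalancerDescriptions[].[LoadBalancerName,ListenerDescriptions[].Listener.SSLCertificateId]' \
  --output text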

Sunday, February 8, 2015

Haskell on CentOS 6.5

Use justhub rather than the version in the EPEL repo.

Don't bother with the version of haskell-platform in the EPEL repo. It is sufficiently out-of-date (circa 2010) that it can't even update itself via cabal install cabal-install. Jump straight to using justhub.

Justhub example for CentOS 6.x:

# install the justhub yum repo:
sudo rpm -ivh http://sherkin.justhub.org/el6/RPMS/x86_64/justhub-release-2.0-4.0.el6.x86_64.rpm

# install single current haskell version into /usr/bin
sudo yum install haskell

# update cabal
cabal update

# e.g. install some packages via cabal
cabal install haskell-src-exts
Now I can get back to coding for exercism.io. Come review my first haskell program.
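
To confirm the justhub install landed where expected (paths assume the default install into /usr/bin):

which ghc cabal
ghc --version
cabal --version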

Thursday, July 3, 2014

Monitorama Conference

I attended the second Monitorama Conference last month in Portland and the first last year in Boston. It’s been a privilege and joy to watch the journey as the conference coalesced from Twitter gripes to discussions to an international happening.
“An Open Source Monitoring Conference & Hackathon”, Monitorama is focused on embracing open source and improving monitoring to improve the lives of folks in development (devs) and operations (ops). Monitors are the tools we use to watch over our computers (and websites) and make sure they are running as expected. Monitorama is quickly becoming one of my favorite conferences, alongside SCaLE, OSCON and YAPC. These all share a theme of grassroots Open Source development and organization. I love the Open Source tenets of sharing, improvement and experimentation.
Twitter griping about monitoring led to the hashtag #monitoringsucks. Venting led to discussions, which led to the realization that the strong emotional response was driven by a need for better tools — we hate monitoring because it is both important and hard to do well. Everyone complains about Nagios, but it’s been the market leader for 10+ years because it works. The tools, and thus monitoring, would be better if we gave them some love. So the conversation migrated to #monitoringLove and discussions of how to make things better. This period saw tools like graphite and statsd emerge into popularity. Possibly as a joke, Jason Dixon floated the idea of a conference and then willed it into awesome existence.
Why Me?
Because you care about the tools that you work with. You’re an artisan within your team and want to help improve the work environment for you and your peers. We’ve all heard that monitoring sucks, but you want to do something about it.
(From Monitorama I, 2013)
monitoring love
#MonitoringLove fit nicely with ideas behind the DevOps movement: improve the dev and ops communities, get them to work together, and get ops to write and share code. As Carlo (@lolcatstevens) of DevOpsLA said during his SCaLE talk, “Ops, you want Dev respect? Ship some code!” (paraphrased). The groundswell of support for tools (and the operators of those tools) was unexpected and encouraging, also spawning #hugops conversations reminding us to thank our ops folks for their tireless struggle to keep everything running. P.S. Hey DM Ops, thanks for everything!!!
The first monitorama was two days, with the second day devoted to a hackathon. A hundred people hunkered down to listen to talks, bond, converse and write code. It was fun (and intimidating) to have so many project authors attending, reminding us that they’re normal people, albeit ones who share their talent and passion. Graphite real-time graphs were a big theme, as were nagios replacements Riemann and Sensu, and log tools like logstash, kibana and elastic search.
A year later, we could see how much new ground was covered. Talks assumed you’d already started using logstash and elastic search and graphite, and had tried a bevy of graphite front-end replacements. Our organizer, Jason Dixon, did a fabulous job of maintaining the small-conference energy and passion even as we increased to ~300 people and expanded to 3 days. We kept a single-track approach, meaning you could see everything and feel included. Inclusion, cooperation and encouragement were all specifically emphasized. The hackathon was less pronounced, merging with tutorials on the third day. After each tutorial, I was struck with project ideas and possible contributions. I really enjoyed the tutorial on flapjack, and I jumped in to fix some tickets and install issues, reminding myself how to use ruby along the way. Hands-on fiddling was encouraged throughout the conference, reminding us not to be hidden observers.
I collected the slide decks for almost all of the presentations this year and collated them into a post for you. Most of them are also available as video uploaded to Vimeo. The audio and video quality are better than I expected, pretty awesome actually.
The Grafana tutorial (video) was particularly well received. Torkel Ödegaard flew in from Sweden to show his new project, built from the core of the excellent Kibana (elastic search) project. Grafana is an open source metrics dashboard and graph editor for Graphite and InfluxDB — use it to build beautiful graphs for graphite data. And there is a live demo to play with while you watch the tutorial. To paraphrase his intro: “I used graphite! I loved it! None of my teammates wanted to make graphs in the terrible UI, sadness. I used Kibana, I F’in’ love kibana. The Graphite UI is terrible. The Kibana UI is awesome. So I started hacking.” And to paraphrase the audience: “HOLY CRAP! I WANT THAT NOW! HOW CAN WE GET YOU TO WORK ON IT FULL-TIME?! LOVE!” And there was much rejoicing #hugops! Torkel also plays a fine game of table tennis; we met in the first round of the Ping Pong tourney, battle of the ‘gaaaaards. Spoiler: I made it to the semi-finals.

Meeting new peers at a conference is a wonderful boost of energy and drive. I highly recommend it, even if the conference is a bit far afield of what you work on; the new skills will help you see new solutions to your current problems.
Top hits I’d recommend from the conference:

  • All the tutorials: Grafana, Dashing, Flapjack, Kibana and InfluxDB.
  • 17th century shipbuilding and your failed software project – hilarious lightning talk of the “WAT” variety — warning, some “adult” language.
  • Keynote by Adrian “netflix guy” Cockcroft.
  • Computers are a Sadness, I am the Cure, an insightfully funny look into software and ops by the incomparable James Mickens. Calorie free, but entertaining. The funnest and funniest talk you’ll see at a tech conference this year.
  • Cost and Complexity of Reactive Monitoring, a wonderful talk on how and why to monitor. Baker is a nice fella, I’m happy to have made friends with him.
  • Lifecycle of an outage, Scott Sanders’ talk about how Github handles outages. Great look at their internal workflow and tools during emergencies.
  • Car Alarms vs Smoke Alarms, a talk about Sensitivity and Specificity as imported from medical probability conversations — how to calculate the positive predictive value of a test. A useful diagram to view while watching.
  • Find your favorite by browsing All Videos.
  • @Fun_Cuddles Audit All The Things was a seriously hardcore talk on security logging, including some sweet hooks to use the linux audit system. “We found no evidence that any customer data was accessed, changed or lost,” generally means “We have no idea!”. Jen is awesome!
  • pretty much ALL OF THE VIDEOS!
Thanks for reading. Please let me know if you watch and enjoy any of these talks; I’d love to discuss them with you.

Wednesday, June 11, 2014

Test-Driven Development with Python.

Harry Percival (@hjwp / obeythetestinggoat on gmail) has written a new book on TDD with Python: "Test-Driven Development with Python." An early release of the book is available for free reading on chimera.oreilly.com.

Last week he led a webcast, "Outside-in TDD and Unit Test Isolation with Python, Django and Selenium." It was almost 2 hours, with lots of good stuff. He started by explaining traditional (inside-out) TDD and then contrasted it with outside-in, all in the context of a webapp.

O'Reilly dropped a 50% discount code during the webcast, not sure how long it will last: "WCYAZ".

I "watched" the webcast live, but was on my phone which only provided the audio stream and not the slides. I'm looking forward to watching the archive and reading the book.

Monday, May 12, 2014

Monitorama Slides 2014

Videos will be posted in the monitorama channel on vimeo: http://vimeo.com/monitorama.

Until then, enjoy this collection of slidedecks and twitter handles.

Day 1:

Please, no More Minutes, Milliseconds, Monoliths... or Monitoring Tools!

Computers are a Sadness, I am the Cure

  • James Mickens
  • No slides posted. No twitter handle.
  • lots of photos.
  • "Say 'Word Count' one more time"

Simple math to get some signal out of your noisy sea of data

The Care and Feeding of Monitoring

Car Alarms and Smoke Alarms

Metrics 2.0

Our Most Wicked Problem

StatsD at the New York Times

The cost and complexity of reactive monitoring

  • Chris Baker
  • @datumrich
  • slides: none yet

From Zero To Visibility

Day 2:

"Auditing all the things": The future of smarter monitoring and detection

Is There An Echo In Here?: Applying Audio DSP algorithms to monitoring

A Melange of Methods for Manipulating Monitored Data

The Final Crontab

This One Weird Time-Series Math Trick

The Lifecycle of an Outage

A whirlwind tour of Etsy's monitoring stack

Wiff: The Wayfair Network Sniffer

Web performance observability

Day 2: Lightning Talks

ServerSpec and Sensu

Monitoring for Distributed Operational Responsibility

Postgres Performance Monitoring

  • Larry Price
  • @laprice

Accidentally catching a hacker with monitoring

  • Xiao Yu
  • @HypertextRanch
  • "We need to teach developers exactly enough stats and math to solve their biggest problems."

Chess - a reflection of life

  • Narenda Vikram D
  • @contactdnv

17th Century Shipbuilding and Your Failed Software Project

Day 3 – Hacking and Tutorials

Kibana Workshop

Flapjack Workshop

Dashing Workshop

InfluxDB Workshop

Grafana Workshop

Friday, November 15, 2013

Kinesis Advantage: mapping the Macintosh Power key

TL;DR

Press = and Scroll Lock together while in a PC master mode to make Scroll Lock the Macintosh Power Key.

Motivation

I normally use non-Windows PC mode (=p) for my Kinesis. Now that I'm on a Mac I need a Command key, so I switched to Windows PC mode. Windows PC mode only changes one thumb key relative to non-PC mode: the right Alt becomes a Command/Windows key. Mac mode remaps all the Alt and Control locations and produces two Command/Windows keys.

I rarely need the power button, so I hadn't bothered to figure this out. But now I want to be able to suspend/power off my laptop without opening it and waiting on graphic layout as the system switches to multi-monitor mode.

How-to

It's a simple matter to pull the Power Key binding from mac (=m) mode into any of the other modes. Press = "Scroll Lock" to copy the binding from default into current. The tricky part was finding the original binding.

  • Kinesis Advantage supports three master settings: macintosh (=m), non-windows pc (=p) and windows pc (=w).
  • macintosh mode (=m) is the default mode.
  • macintosh mode maps Scroll Lock to the Power Key.
  • Any key that is mapped by a master setting can be individually remapped using the = key in the number row (top left, above Tab).
Windows PC layout:

Kinesis USB Advantage manual

Saturday, June 8, 2013

Hack day with Kenny: Fey::ORM, testing and screen. [lost draft from 1/12/10]

After sleeping through the LILAX users group meeting (sorry guys), I rolled up to Kenny's (Kenny Flegal), where he had invited me for a day of coding and authentic Salvadorian food. Win Win!

I showed him briefly the topic of my upcoming Monger's presentation, but mostly we looked at his current project. He is forking a GPL-licensed project, to recreate part of the functionality and extend it in a different direction. Along the way he's rewriting the app layer in Perl, replacing the original command-line PHP scripts.

We discussed the various clauses of the GNU Affero GPL with regards to the hosting of the project during the initial revs. Can he have a public repository before he has finished changing all references from the old name to the new name and adding "prominent notices stating that you modified it, and giving a relevant date" as per Section 5, paragraph a? We decided that he probably could, but that it'd be easier to start with a private repo and not publish until that part is done. That seems sub-optimal from a "getting the source to the people" mindset, but it is more optimal in the "protect the good name of the original project and publishers" sense.

Along with switching from PHP to Perl, he's pulling out the hard-coded SQL from the scripts and moving to an ORM. He's picked Dave Rolsky's impressive Fey ORM. This project has a ridiculously complex set of schemas, with inconsistent table names and no explicit foreign key constraints. As such, it is extra work to get the Fey schema situated.

Kenny started to give me a run-through of some of the code, but it was awkward with both of us on laptops to see the code conveniently. I made him stop and set up a screen session for sharing, as described in my previous post on screen. This was more difficult than I expected, with the problem eventually being that Ubuntu 9.04 and beyond has moved /usr/bin/screen to /usr/bin/screen.real and made screen a shell wrapper. The screen multiuser ACL system requires that the screen binary be setuid (chmod +s). With this setup we needed to make screen.real setuid. That took a while to notice.
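
For reference, the eventual fix looked roughly like this. This is reconstructed from memory and assumes Ubuntu's wrapper layout; "kenny" is just the example account being granted access.

# the wrapper at /usr/bin/screen is a shell script; the real binary needs the setuid bit
ls -l /usr/bin/screen /usr/bin/screen.real
sudo chmod u+s /usr/bin/screen.real

# then, inside the host's screen session, enable sharing and grant the guest account access:
#   C-a :multiuser on
#   C-a :acladd kenny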

Once we had a shared session open, it was much easier for him to give me a guided tour of the codebase and database/sql setup. Once that was clear it was time to get some code started. He showed me some of the Fey::ORM model code and how he was migrating over the individual sql statements to the ORM. He had been plugging away on the model code for a while, starting by creating a comment for every line of sql in the application including the file and line of the caller.

The next step was clear: we needed some tests. We set to work getting an initial test of the model code. First we installed Fey::ORM::Mock as a mock layer. This works at a higher level than a standard DBD::Mock interface, to allow better testing of the Fey::ORM features. The test didn't pass at first due to missing data in the mock object, so we grabbed a list of the fields that mapped to DB fields and started adding values to pass constraint failures on the data. Once we had a minimal set of data, we started to see problems with the ORM schema description. The lack of well-defined foreign key constraints meant we needed to explicitly define that structure for the ORM. More boilerplate code into the model. We repeated this test-update-repeat cycle a few more times, adding more data linkage descriptions.

I took a brief break from our pairing and jumped to a different screen to install some goodies. I grabbed a copy of the configuration files from the December la.pm.org talk and started updating his config. He didn't have a .vimrc, .vim or .perltidyrc on this brand new dev box, so I pulled those in from the repo. I showed him how much time using ":make" in vim could slice off his build/test cycle, and he was super excited. (ok, not till the third or fourth try but he eventually got the hang of it).

To get around some issues in code placement, I modified the .vimrc and .vim/ftplugin/compiler code to add -MFindBin::libs to the calls to perl -c and prove. This allowed the parent libs/ directory to be found for these non-installed modules. This is a bit of a hack and I'll get it removed as we move closer to an initial release and pick a packaging tool, possibly Dist::Zilla.

An open question is the speed of Fey::ORM. It takes a big startup hit while building the models from the schema and interacting with the database. This is supposed to lead to a big speed gain during runtime from aggressive caching of that information. All I know for certain is that the compile-run-test cycle was really slow. This is my first time using Fey so I don't know how this plays out normally. It could just be that the number of crosslinked tables in the db config were causing additional slowdowns.

By this point we had already had two delicious meals of Salvadoran cuisine and it was approaching midnight. The first meal was home-cooked fried (skinless) chicken for lunch and the second was pupusas at an excellent local place in Van Nuys. I was all coded out, which made for a perfect transition to the party at Andy Bandit's that night, conveniently just 6 miles from Kenny's.

All in all, a fine Saturday.

Thursday, May 23, 2013

Remap XBMC remote control power off

We can block a remote control trigger in XBMC by mapping it to the NOOP (No Operation) action.

I've put the following into my keyboard.xml file ($HOME/.xbmc/userdata/keymaps/keyboard.xml) to disable the "power" button on my MCE remote control. Prior to this change, when I'd change "Activity" modes on my remote control (Harmony 650) it would send a power toggle that would cause XBMC to exit.

Previously I had a hack of binding the button to a different code. Today I learned about the NOOP binding from the friendly team at the XBMC booth at SCALE11x. Thanks!

<keymap>
    <global>
        <remote>
            <power>NOOP</power>
        </remote>
    </global>
</keymap>
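
For completeness, getting the file into place is just the following; restarting XBMC picks up the new keymap.

mkdir -p ~/.xbmc/userdata/keymaps
$EDITOR ~/.xbmc/userdata/keymaps/keyboard.xml   # paste in the <keymap> block above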

Code Review (part 1)

I love code review

What is code review? The definitions below sound reasonable, but who could love anything that involves a "formal process" where code is "scrutinized"? Sounds like a lot of work, right? What's the upside?

Code review is systematic examination (often as peer review) of computer source code intended to find and fix mistakes
http://en.wikipedia.org/wiki/Code_review
A code review (sometimes called a program inspection) is a formal process where a software developer presents the code he or she has written to other software engineers who are familiar with the project. The code is scrutinized carefully to identify potential bugs, design problems, non-compliance with project standards, inconsistencies, and any other problems in the code.
http://sea.ucar.edu/best-practices/code-review

Code review allows developers to collaborate and improve code by reading it early and catching bugs during development. The earlier bugs are caught, the less impact and expense they cause. Code review is a lot more fun before the changes are live than retroactively trying to figure out what change broke everything in production.

I remember code reviews at my first software company. Someone must have heard they were a good idea, so we had to do code reviews of new features. We waited until the feature was done, then printed out all the code and took a few engineers into a room. We'd sit there for a few hours looking over the printouts before making a few token suggestions and calling it quits. We shared some small insights and caught a few bugs, but overall this heavy process was unstructured, late, disorganized and ineffective. We had the right motivations, but we were looking at code too late in too big of a chunk.

At the other end of the scale is the ad-hoc system of emailing around some diffs or code refs and asking for input. Here the lack of formal process is a pain -- emailing diffs around? Another process flowing through (stalling in) my mailbox? Where do I send my comments, how do I archive the results?

Somewhere in the happy middle are tools for "lightweight code review." These tools take a diff and present it in a web interface, providing the ability to view the diffs, make comments and enforce some sort of workflow. Gerrit (inspired by Rietveld, which was inspired by Mondrian), Review Board and BarKeep are some of the open source options; GitHub reviews are free, and paid software is available from SmartBear (Code Collaborator) and Atlassian (Crucible), among others. These systems all make different trade-offs: pre-vs-post commit, forced-vs-optional reviews, VCS agnostic-vs-integrated, inline vs side-by-side diffs.

At work we've been using Gerrit for two years now, after switching from Rietveld when we migrated from SVN to git. Gerrit is very opinionated: it is for mandatory, pre-commit reviews and only supports git. Gerrit integrates nicely with the Jenkins continuous integration server for running unit tests before the review. We originally picked Gerrit at [undisclosed startup] and managed to integrate it into Demand after we were acquired.

For open source projects, I'm happy with GitHub pull-request discussions (I still wish I could see side-by-side diffs! I have this huge monitor for a reason) and couldn't see enforcing the Gerrit model (even though the Android project does) without discouraging drive-by patches from random developers. But at work, I want the small dollop of process that Gerrit provides.

I've been super happy with gerrit and can't wait to tell you more about it in "Code Review, Part II. Gerrit FTW".

Tuesday, May 14, 2013

Mojo::UserAgent dom parsing is FUN!

I'm about to roll out a new feature at work. I've added new data to the "schema" behind some of our pages and another team has implemented the template changes.

Now, how do I test that the feature appears on the page? And by "on the page" I mean embedded as attributes in a JavaScript call on the page.

I used Mojo::UserAgent and its built-in DOM handling to make this easy-peasy! Load the page, look for script tags, find the one calling our Magic.Marker function and then use a regex to pull the args. Wrap it all up in Test::Most and throw some data into __DATA__!

P.S. Writing this post took considerably longer than writing the test.

#!/usr/bin/perl
use v5.12;

use Mojo::UserAgent;
use List::Util qw(first);
use Test::Most;

my $ua = Mojo::UserAgent->new();
sub x_param_from_url
{
    # load URL and find the first script block that 
    # contains Magic.Marker.  Parse Magic.Marker args 
    # for items like "{ x: value }" and return all the 
    # values found.
    my $url     = shift;
    my @scripts = $ua->get($url)->res->dom->find('script')->each();
    my $script  = first { $_->all_text =~ /Magic\.Marker/ } @scripts;
    return unless $script;
    
    my $text = $script->all_text;
    my @matches = ( $text =~ m/\{ \s* x \s* : \s* (\S+) \s* \}/gimx );
    return @matches;
}

foreach my $data (<DATA>)
{
    chomp $data;
    my ( $url, @expected ) = split( /\s/, $data );
    # redirect to the internal-staging server
    $url =~ s/www\.example\.com/internal-staging.example.com/;
    my @output = x_param_from_url($url);
    eq_or_diff( \@output, \@expected, "$url")
}
done_testing

__DATA__
www.example.com/how-to_123  '2' '3'
www.example.com/why-not-1234 '1' '2'
www.example.com/why-not-777 '3'
This produces a lovely TAP output for the site:
% ./verify.pl

not ok 1 - internal-staging.example.com/how-to_123
#   Failed test 'internal-staging.example.com/how-to_123'
#   at ./verify.pl line 35.
# +----+---------+----+----------+
# | Elt|Got      | Elt|Expected  |
# +----+---------+----+----------+
# |   0|'\'2\''  |   0|'\'2\''   |
# |    |         *   1|'\'3\''   *
# +----+---------+----+----------+
ok 2 - internal-staging.example.com/why-not-1234
ok 3 - internal-staging.example.com/why-not-777

Monday, January 21, 2013

git branch cleanup -- show commits that need to merge

I'm cleaning up my feature branches. I want to look at any dangling commits that only exist in the branches.

First Pass: Remove branches that have already been merged into master

These branches can be detected via git branch --merged; pipe the list to git branch -d to delete them.
% git branch|wc -l
93

% git branch --merged|wc -l
24

# not quite right, want to skip current branch
% git branch --no-color --merged | grep -v '\*' | xargs -n 1 git branch -d 
...
Deleted branch deleteme (was 4358c15).

% git branch --merged
* master

% git branch|wc -l
70

Second Pass: Find dangling commits

Look at the dangling commits in the remaining branches and see if they are important. If we want the commits, we'll merge them into master. If not, we'll force delete the branch with git branch -D.

Use git merge-base to find the most recent ancestor between this branch and master.

# pick a branch
% git checkout makefile_dirs
% git merge-base --all HEAD master
9906334e464c6e93103b786672b14c31c27f8df8

#The trailing ".." is important, as this specifies a range of commits
% git log --pretty=oneline 9906334e464c6e93103b786672b14c31c27f8df8..

f48d27a93239558d5737652bc0e397d99d0f43fc improves directory creation in makefile

#We can merge those latter two steps into:
% git log --pretty=oneline $(git merge-base --all HEAD master)..

f48d27a93239558d5737652bc0e397d99d0f43fc improves directory creation in makefile
Including the prior commit in the log will help determine how old this branch is. We'll add a ^ to look at the parent of the branch commit.
% git log --pretty=oneline $(git merge-base --all HEAD master)^..

f48d27a93239558d5737652bc0e397d99d0f43fc improves directory creation in makefile
9906334e464c6e93103b786672b14c31c27f8df8 passes site_id and rad_id through to ou
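
To survey all of the remaining branches in one pass, a small loop helps. This is only a sketch: it assumes branch names without spaces and that master is the integration branch.

# show the dangling commits for every branch not yet merged into master
for b in $(git branch --no-color --no-merged master | grep -v '^\*'); do
    echo "== $b"
    git log --pretty=oneline $(git merge-base master "$b").."$b"
done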

Saturday, December 1, 2012

Perl Advent Calendars: 2012

Move on over, Movember! Happy December!
Make room for Perl Advent Season!

We are blessed with many perl-themed advent calendars. I'm so excited to have so many squares to open! Now, if I could just get my article(s?) for the Perl Advent Calendar finished (started!!!)

Yearning for something more active than just reading an article each day? Make a pull request to your favorite OSS projects with 24 pull requests, and brighten the day of your favorite developer immeasurably.

Perl Advent Calendars

Perl Advent
http://perladvent.org/2012/
(Formerly the perladvent.pm.org calendar)
Perl Dancer -- the dancer mini web framework
http://advent.perldancer.org/2012/
OX -- a web anti-framework
first time advent calendar!
http://iinteractive.github.com/OX/advent/
Perl 6!
http://perl6advent.wordpress.com/
For the adventurous: Japanese Perl Advent Calendars, 2 different tracks:
http://perl-users.jp/articles/advent-calendar/2012/
http://perl-users.jp/articles/advent-calendar/2012/hacker/ Hacker Track
http://perl-users.jp/articles/advent-calendar/2012/casual/ Casual Track

Retired Advent Calendars:

Catalyst Advent Calendar -- The Catalyst Web Framework
The catalyst advent calendar has been retired, replaced by a monthly series. The past 7 years of calendars are available.
http://www.catalystframework.org/calendar/
Ricardo's Advent Calendar -- a month of RJBS. Not updated since 2010, but he did give us a Hanukkah calendar last year!
http://xn--8dbbfrx.rjbs.manxome.org/2011/ Hanukkah 2011
http://advent.rjbs.manxome.org/2010/
Plack advent calendar: Not updated since 2009.
http://advent.plackperl.org/

A bonus list

for the sysadmin and web geeks in your life:
SysAdvent - The Sysadmin Advent Calendar.
http://sysadvent.blogspot.com/
24 ways - Advent Calendar for Web Geeks.
http://24ways.org/

Monday, November 19, 2012

Fingerworks Touchstream on Mac OSX 10.7.4

Woohoo! "New" Fingerworks Touchstream keyboard arrived today. This is my first Touchstream. It's from before Fingerworks was bought by apple and shuttered, circa 2005.

It works out of the box, but the full experience requires running the configuration software to change the chord/multitouch bindings. Getting this running on modern hardware is challenging.

  • The company website is down.
  • The installer is for PowerPC only (no longer supported by Apple).
  • The application itself is a Java app that requires an old version of Java (no longer supported by Apple).
  • The Java app uses the open-source jusb library, which has been mostly abandoned.
The awesome people at the fingerfans message board have done a lot to keep these beloved pieces of future-tech up and running. They have a copy of the original website, the original help forums, original software, third-party software, manuals, pds, and instructions for repair. I've seen posts on replacing the FPGA, which is just NUTS-slash-Awesomesauce.

My steps:

  • install 1.5.3 software.
    Download this custom installer for Linux and run it on your Mac: 1.5.3 software
    wget http://fingerfans.dreamhosters.com/download/setupfw153_noJava.bin
    sh setupfw153_noJava.bin
  • update jusb
    download a patched jusb from github, build and install into /Applications/FingerWorks/
    git clone https://github.com/DanThiffault/jusb.git
    cd jusb
    make
    cp -r libjusbMacOSX.jnilib* /Applications/FingerWorks/lib/jusb/
    cp jusb.jar /Applications/FingerWorks/lib/jusb/
  • install an alternative run script mtu_run.sh into /Applications/Fingerworks
    wget -O /Applications/Fingerworks/mtu_run.sh https://raw.github.com/gist/1096642/9004f21e6697fa080bb1ddde95f8a2a9d2bccae5/mtu_run.sh
    chmod a+rx /Applications/Fingerworks/mtu_run.sh
Now launch from the command line:
/Applications/Fingerworks/mtu_run.sh

Success!

The multitouch tool aka fingerworks.firmup.UtilityLauncher launched and detected my "TouchStream ST/LP ver 1.6". [RUN Diagnostics...] reported:
All sensor array tests PASSED!
Loaded 1243 Key/Gesture Mappings SUCCESFULLY
    Keymatrix#: 34
Testing Complete.

Not so fast: Can't write to device

Doh. Seems I can run the diagnostics, but I can't push a new configuration onto the device. That's a major bummer. I'll have to look into the java errors and see what can be done.
Starting transfer...
        Writing MTS_config Binary to:  /Users/andrew/Documents/MyGestures/custom4f0040stealth34.byt
        Sending configuration to Gesture Processor...
          (Sending DeleteMsg w/ minfirmver 326, minsurfver 7, keymatrixver 34
          (Sending user options)
          (Sending 16 macro definitions)
          (Sending 100 tapareas)
          (Sending 0 switches)
          (sending -1 hand)
          (sending 1 hand)
          (sending 2 hand)
          (sending 0 hand)
          (Sent 642 total events!)
...finished merging /Users/andrew/Documents/MyGestures/custom4f0040stealth34.byt
S8 Terminated with FLASH image CRC32: 0x6f1f72da
new  idDevice: 0x160, idProduct: 0x90b,  idVendor: 0xe97
USB DFU suffix appended to: /Users/andrew/Documents/MyGestures/custom4f0040stealth34.U.byt
        MTS_config Binary /Users/andrew/Documents/MyGestures/custom4f0040stealth34.U.byt ready for transfer! 
        existing  idDevice: 0x160        idProduct: 0x90b        idVendor: 0xe97
Java computed firmware image CRC32 0x6f1f72da on 32870 bytes
Exception in thread "Thread-8" java.lang.IllegalAccessError: tried to access class usb.linux.DeviceImpl from class fingerworks.firmup.USBupgrader
        at fingerworks.firmup.USBupgrader.a(Unknown Source)
        at fingerworks.firmup.USBupgrader.downloadFirmwareFile(Unknown Source)
        at fingerworks.firmup.USBupgrader.send2GestureProcessor(Unknown Source)
        at fingerworks.firmup.a.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:680)
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextGetCTM: invalid context 0x0
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextSetBaseCTM: invalid context 0x0
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextGetCTM: invalid context 0x0
Nov 19 16:57:06 femto.corp.dm.local java[59263] : CGContextSetBaseCTM: invalid context 0x0

Wednesday, October 17, 2012

GNU screen clipboard to X11 clipboard integration

Now that we have Clipboard cut-and-paste working in remote vim, let's get GNU screen to interact with the X Clipboard. This is useful when copying a large scrollback buffer into a browser app or email client. I can now copy from my screen session to my local OSX client and back!
  1. Install xsel
    sudo aptitude install xsel
  2. Add to .screenrc:
    # read and write screen clipboard to X clipboard.
    bind > eval writebuf "exec sh -c 'xsel -bi </tmp/screen-exchange'"
    bind < eval "exec sh -c 'xsel -bo >/tmp/screen-exchange'" readbuf
    
  3. ...
  4. profit

How it works

GNU screen has a built-in cut-and-paste metaphor. We leverage two new keybindings C-A > and C-A < to exchange screen data with the X11 Clipboard. The OSX X11 app then pushes the clipboard changes into the local OSX clipboard.

C-A > dumps the current screen paste buffer to /tmp/screen-exchange, and then uses xsel to push the contents of /tmp/screen-exchange to the X11 Clipboard.

C-A < uses xsel to pull the X11 Clipboard contents to /tmp/screen-exchange and then populates the screen paste buffer with the contents of /tmp/screen-exchange. At this point, the normal C-A ] will paste the data.

xsel needs a valid DISPLAY configured to interact with X. If using a remote screen session, you'll need to forward your X connection and make sure your DISPLAY var is valid inside of your screen session. For more details, see my OSX Remote VIM Clipboard post.
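
A quick way to confirm the plumbing before blaming the key bindings (the DISPLAY value is just an example; yours will differ per connection):

# inside the screen session:
echo $DISPLAY                     # should show something like localhost:10.0
echo "clipboard test" | xsel -bi  # push a string to the X11 CLIPBOARD
xsel -bo                          # read it back; it should also land in the OSX clipboard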

See the commit to my screenrc in my config repository.

Update:

Remove the -n flag from xsel -bi.

Tuesday, September 25, 2012

Lambda Architecture

aka "Runaway complexity in Big Data, and a plan to stop it."

Nathan Marz's talk tonight at Strange Loop coined the term "Lambda Architecture" to describe a hybrid batch+realtime data engine built on functions running over immutable data. This builds on themes from his "Big Data" book.

The pieces all exist, but there's no simple packaging over all of them: a distributed raw data store, map-reduce for batch (hadoop/mapr with pig, hive, etc.) to precompute views that are stored in fast-read, map-reduce-writable DBs (voldemort, elephantdb), storm for streams, a high-throughput/small-volume db for the storm output (cassandra, riak, hbase), and a custom query merge on top of both. There's no pre-made piece for the custom query merge; possibly storm works there.

Exciting and awesome!

slides and a HackerNews discussion

Monday, September 17, 2012

OSX + remote vim clipboard sync

IT WORKS! <sfx: evil genius laugh />

A few pieces are required to get smooth integration of the local OSX clipboard with the remote vim clipboard. I'll walk you through the configurations and you'll be cutting-and-pasting like it's no big thang. Pasting large blocks of text works so much better via "+p than via system paste into an xterm vim window.

Puzzle Pieces:

  • OSX clipboard syncing with the local X11
  • X11 forwarding in ssh
  • vim compiled with the +xterm_clipboard setting.
  • optional: configure vim to use xterm clipboard by default
  • optional: a better OSX terminal: iTerm2
  • optional: screen + DISPLAY var

OSX clipboard sync with X11

  1. launch X11 (Applications::Utilities::X11)
  2. open pasteboard preferences (X11::preferences::Pasteboard)
  3. check:
    1. enable syncing,
    2. update Pasteboard when CLIPBOARD changes,
    3. update CLIPBOARD when Pasteboard changes
  4. you may want to quit X11 now to ensure the new settings are saved.
  5. Note: don't set update Pasteboard when CLIPBOARD changes, as it produces a very strange paste behavior where full lines will paste as relative.

ssh X-forwarding:

You can enable this on the fly via the -X flag to ssh or by adding "ForwardX11 yes" to your .ssh/config file. ForwardX11 can be set globally or per-host.
Example .ssh/config entry for my vm:
host vm53
  user vm53
  ForwardX11 yes
The forwarding provided via ForwardX11 is seen as untrusted by the X Security extension. Untrusted clients have several limitations: they can't send synthetic events or read data from other windows, and they are time limited.

If you really trust the remote host you can use Trusted forwarding. This is enabled with the -Y flag to ssh or the "ForwardX11Trusted true" option in .ssh/config. I've switched to using trusted connections when connecting to my local VM since my connections are open for days/weeks at a time.

host vm53
  user vm53
  ForwardX11Trusted true

Vim with +xterm_clipboard

Check the capabilities of your vim via vim --version; you're looking for +xterm_clipboard.
vm53% vim --version | grep xterm_clipboard
+xsmp_interact +xterm_clipboard -xterm_save
If your version of vim doesn't have xterm_clipboard, try another package. I'm using vim-nox for my debian/ubuntu machines.

At this point, you should be able to cut and paste using the + buffer to interact with the system clipboard. Paste with "+p and copy/yank with "+y. Under X the clipboard is in the "+" buffer; under Windows it is the "*" buffer. In OSX gvim, "+" and "*" appear to be the same buffer?

configure vim to use xterm clipboard by default

Remembering to use the + buffer is extra work. We can make this automatic by setting the clipboard option in vim. Set clipboard=unnamedplus (added in Vim 7.3.074) to use the system clipboard for the default (unnamed) buffer. At this point, p will paste from the system clipboard. AMAZING!

iTerm2

You should ditch the default Terminal app that comes with OSX and use iTerm2 instead. You can have it do "copy on select," just as you'd expect from an Xterm, and it all ties into the work we did above. It also has some other interesting features, like native tmux support.

DISPLAY env with screen

When reconnecting to your remote screen session, you may end up with the DISPLAY variable out-of-sync. By default, I get DISPLAY=localhost:10.0 when I connect to my VM. But each new connection opens a new back channel on a new port: :11.0, :12.0, etc. You may need to update the value of DISPLAY inside your screen session, via export DISPLAY=localhost:10.0, with the correct DISPLAY value for this ssh connection -- check $DISPLAY outside of the screen session to get the value.
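
In practice that looks something like this; the display number is only an example, and xdpyinfo is just a convenient way to prove the forwarded connection works.

# outside screen: note the DISPLAY ssh assigned to this connection
echo $DISPLAY                  # e.g. localhost:12.0

# inside the reattached screen session: adopt it, then verify X forwarding works
export DISPLAY=localhost:12.0
xdpyinfo | head -1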

P.S.

I had some troubles testing until I realized I was expecting select-to-copy behavior in Chrome Browser under OSX. Ha! I'm glad I finally spent the 20 minutes to get all these pieces aligned.

Update

Updated to show that X users want the + buffer rather than the * buffer, after reading up on the original patch.

Update

Updated with -Y/X11ForwardTrusted information.
Updated to warn against "update pasteboard" option in OSX X11 app

Wednesday, September 12, 2012

Modify submit-type for Gerrit project via meta/config

The submit-type of a Gerrit code review project cannot be changed in the UI after creation. It can be modified via the hidden meta/config branch. Any setting available to create-project can be edited this way.

Project information is not stored in the Gerrit database. The information is stored directly in the git repository in a branch named 'meta/config', in two files: 'project.config' and 'groups'. The values from these files are cached in the 'project_list' and 'projects' caches.

Steps to make a change:

  1. set read and push permissions on refs/meta/config
  2. check out the branch,
  3. change the files,
  4. push the repo back,
  5. clear the cache.

Check out the branch:

% git fetch origin refs/meta/config:refs/remotes/origin/meta/config
% git checkout meta/config
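
If you only want to peek at the current settings without keeping a local branch around, you can read the files straight from the fetched ref. A small sketch using plain git:

% git fetch origin refs/meta/config
% git show FETCH_HEAD:project.config
% git show FETCH_HEAD:groups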

Push back the changes:

#directly:
% git push origin meta/config:meta/config
#via review:
% git push origin meta/config:refs/for/refs/meta/config

Flush the caches:

% ssh gerrit gerrit flush-caches --cache project_list
% ssh gerrit gerrit flush-caches --cache projects

project.config

[access "refs/*"]
        owner = group MYgroup
[receive]
        requireChangeId = true
[submit]
        mergeContent = true
        action = merge always

groups

# UUID                                          Group Name
# eca2c52d733e5740a01747e71f018dcfdeadbeef      MYgroup
I found meta/config mentioned in a couple of posts in the repo-discuss newsgroup.

Friday, August 31, 2012

Unexpected error report from cpantesters!?

After pushing my release of App::PM::Website, I received a cpantesters failure report. Opening it revealed an error from File::Path not exporting make_path. What-the-what? Of course File::Path exports make_path, that's like all it does!
# Failed test 'no eval errors'
# at t/app-pm-website.t line 12.
# got: '"make_path" is not exported by the File::Path module

Further inspection of the report shows that this test box is using File::Path v2.04 with perl 5.10. You remember: the File::Path version released in 2007.

PREREQUISITES:
Here is a list of prerequisites you specified and versions we
managed to load:

Module Name Have Want
App::Cmd 0.318 0
App::Cmd::Command 0.318 0
App::Cmd::Tester 0.318 0
Config::YAML 1.42 0
Data::Dumper 2.121_14 0
Date::Parse 2.30 0
DateTime 0.76 0
DateTime::Format::Strptime 1.52 0
File::Path 2.04 0
File::Spec 3.33 0
...

OK, so this was me being lazy in specifying my File::Path requirement, as I clearly needed 2.07 or greater (from 2008). Wow. A patched version of App::PM::Website is now released and pushed to CPAN.

I have to wonder if this is a box set to be purposefully obtuse and install the lowest version it could find that matched my requirements, just to catch this sort of thing? If so, my hat is off to you fine perler!

Wednesday, August 29, 2012

Hadoop::Streaming v0.122420 released

Hadoop::Streaming perl module v0.122420 released to CPAN today.

Yanick was kind enough to send me a patch (Pull Request #2) to clean up the formatting of the POD documentation. Thank you for the patch Yanick! I love patches and pull requests! Woot!

You know what else I love? Dist::Zilla! Thanks Ricardo! Here are the complete steps to release a new version and push it to CPAN after merging in the pull request:
git pull && dzil release

Steps accomplished by dzil release:

  1. generate new version number
  2. podweaver magic to generate structured POD
  3. boilerplate: create LICENSE, MANIFEST, META.json, META.yml, Makefile, etc.
  4. create tar file of distribution
  5. extract tar file and run all tests
  6. verify no dirty files in git
  7. update Changes file to include the new version number.
  8. git commit Changes file
  9. upload tar file to cpan
  10. git tag with version number
  11. git push to origin