Friday, November 15, 2013

Kinesis Advantage: mapping the Macintosh Power key

TL;DR

Press = and Scroll Lock together while in a pc master mode to make Scroll Lock the Macintosh Power Key.

Motivation

I normally use non-windows pc mode (=p) on my kinesis. Now that I'm on a mac I need a Command key, so I switched to windows pc mode. Windows pc mode changes only one thumb key relative to non-windows pc mode: the right alt becomes a Command/Windows key. Mac mode remaps all the alt and control locations and produces two Command/Windows keys.

I rarely need the power button, so I hadn't bothered to figure this out. But now I want to be able to suspend/power off my laptop without opening it and waiting for the graphics layout to settle as the system switches to multi-monitor mode.

How-to

It's a simple matter to pull the Power Key binding from mac (=m) mode into any of the other modes: press = and Scroll Lock together to copy the binding from the default layout into the current mode. The tricky part was finding the original binding.

  • The Kinesis Advantage supports three master settings: macintosh (=m), non-windows pc (=p) and windows pc (=w).
  • macintosh mode (=m) is the default mode.
  • macintosh mode maps Scroll Lock to the Power Key.
  • Any key that is mapped by a master setting can be individually remapped using the = key in the number row (top left, above Tab).
[Figure: Windows PC layout, from the Kinesis USB Advantage manual]

Saturday, June 8, 2013

Hack day with Kenny: Fey::ORM, testing and screen. [lost draft from 1/12/10]

After sleeping through the LILAX users group meeting (sorry guys), I rolled up to Kenny's (Kenny Flegal), where he had invited me for a day of coding and authentic Salvadoran food. Win Win!

I briefly showed him the topic of my upcoming Mongers presentation, but mostly we looked at his current project. He is forking a GPL licensed project to recreate part of the functionality and extend it in a different direction. Along the way he's rewriting the app layer in perl, replacing the command line php scripts.

We discussed the various clauses of the Gnu Affero GPL with regards to hosting the project during the initial revs. Can he have a public repository before he has finished changing all references from the old name to the new name and adding "prominent notices stating that you modified it, and giving a relevant date" as per Section 5, paragraph a? We decided that he probably could, but that it'd be easier to start with a private repo and not publish until that part is done. That seems sub-optimal from a "getting the source to the people" mindset, but more optimal from a "protect the good name of the original project and publishers" standpoint.

Along with switching from php to perl, he's pulling the hard coded sql out of the scripts and moving to an ORM. He's picked Dave Rolsky's impressive Fey::ORM. This project has a ridiculously complex set of schemas, with inconsistent table names and no explicit foreign key constraints. As such, it is extra work to get the Fey schema situated.

Kenny started to give me a run-through of some of the code, but with both of us on laptops it was awkward to see the code conveniently. I made him stop and set up a screen session for sharing, as described in my previous post on screen. This was more difficult than I expected, with the problem eventually being that Ubuntu 9.04 and beyond has moved /usr/bin/screen to /usr/bin/screen.real and made screen a shell wrapper. The screen multiuser ACL system requires that the screen binary be setuid (chmod +s), so with this setup we needed to make screen.real setuid instead. That took a while to notice.
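For the record, the fix boiled down to something like this (a sketch; the user and session names are placeholders):

# make the real binary setuid so the multiuser ACL system works
% sudo chmod u+s /usr/bin/screen.real

# in the host's screen session, turn on sharing and grant the guest account:
# Ctrl-a :multiuser on
# Ctrl-a :acladd guestuser

# the guest then attaches with:
% screen -x hostuser/sessionname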

Once we had a shared session open, it was much easier for him to give me a guided tour of the codebase and database/sql setup. Once that was clear it was time to get some code started. He showed me some of the Fey::ORM model code and how he was migrating over the individual sql statements to the ORM. He had been plugging away on the model code for a while, starting by creating a comment for every line of sql in the application including the file and line of the caller.

The next step was clear: we needed some tests. We set to work getting an initial test of the model code. First we installed Fey::ORM::Mock as a mock layer. This works at a higher level than a standard DBD::Mock interface, allowing better testing of the Fey::ORM features. The test didn't pass at first due to missing data in the mock object, so we grabbed a list of the fields that mapped to DB fields and started adding values to pass constraint failures on the data. Once we had a minimal set of data, we started to see problems with the ORM schema description. The lack of well defined foreign key constraints meant we needed to explicitly define that structure for the ORM. More boilerplate code into the model. We repeated this test-update-repeat cycle a few more times, adding more data linkage descriptions.
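From memory, the seeding side of that first test looked roughly like this sketch (the MyApp::Schema and MyApp::Model::Host names are placeholders, not his real classes; see the Fey::ORM::Mock docs for the full synopsis):

use Test::Most;
use Fey::ORM::Mock;

# point the mock layer at the schema class; no live DB connection needed
my $mock = Fey::ORM::Mock->new( schema_class => 'MyApp::Schema' );

# seed just enough row data to get past the model's constraint checks
$mock->seed_class(
    'MyApp::Model::Host' => { host_id => 1, hostname => 'db1.example.com' },
);

# seeded data comes back when the class is instantiated
my $host = MyApp::Model::Host->new( host_id => 1 );
is( $host->hostname, 'db1.example.com', 'mocked row data round-trips' );

done_testing;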

I took a brief break from our pairing and jumped to a different screen to install some goodies. I grabbed a copy of the configuration files from the December la.pm.org talk and started updating his config. He didn't have a .vimrc, .vim or .perltidyrc on this brand new dev box, so I pulled those in from the repo. I showed him how much time using ":make" in vim could slice off his build/test cycle, and he was super excited. (ok, not till the third or fourth try but he eventually got the hang of it).

To get around some issues in code placement, I modified the .vimrc and .vim/ftplugin/compiler code to add -MFindBin::libs to the calls to perl -c and prove. This allowed the parent libs/ directory to be found for these non-installed modules. This is a bit of a hack and I'll get it removed as we move closer to an initial release and pick a packaging tool, possibly Dist::Zilla.
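In effect, :make ends up running something along these lines (file names hypothetical):

% perl -MFindBin::libs -c lib/MyApp/Model.pm
% PERL5OPT=-MFindBin::libs prove t/model.t

FindBin::libs walks up the directory tree from the file being compiled and adds the module directories it finds to @INC, which is what lets these uninstalled modules resolve.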

An open question is the speed of Fey::ORM. It takes a big startup hit while building the models from the schema and interacting with the database. This is supposed to lead to a big speed gain during runtime from aggressive caching of that information. All I know for certain is that the compile-run-test cycle was really slow. This is my first time using Fey, so I don't know how this plays out normally. It could just be that the number of crosslinked tables in the db config was causing additional slowdowns.

By this point we had already had two delicious meals of Salvadoran cuisine and it was approaching midnight. The first meal was home cooked fried (skinless) chicken for lunch and the second was pupusas at an excellent local place in Van Nuys. I was all coded out, which made for a perfect transition to the party at Andy Bandit's that night, conveniently just 6 miles from Kenny's.

All in all, a fine Saturday.

Thursday, May 23, 2013

Remap XBMC remote control power off

We can block a remote control trigger in XBMC by mapping it to the code NOOP (No Operation).

I've put the following into my keyboard.xml file ($HOME/.xbmc/userdata/keymaps/keyboard.xml) to disable the "power" button on my MCE remote control. Prior to this change, when I'd change "Activity" modes on my remote control (Harmony 650) it would send a power toggle that would cause XBMC to exit.

Previously I had a hack that bound the button to a different code. I learned about the NOOP binding today from the friendly team at the XBMC booth at SCALE11x. Thanks!

<keymap>
    <global>
        <remote>
            <power>NOOP</power>
        </remote>
    </global>
</keymap>

Code Review (part 1)

I love code review

What is code review? The definitions below sound ok, but who could love anything that includes "formal process" and "scrutinized"? Sounds like a lot of work, right? What's the upside?

Code review is systematic examination (often as peer review) of computer source code intended to find and fix mistakes
http://en.wikipedia.org/wiki/Code_review
A code review (sometimes called a program inspection) is a formal process where a software developer presents the code he or she has written to other software engineers who are familiar with the project. The code is scrutinized carefully to identify potential bugs, design problems, non-compliance with project standards, inconsistencies, and any other problems in the code.
http://sea.ucar.edu/best-practices/code-review

Code review allows developers to collaborate and improve code by reading it early and catching bugs during development. The earlier bugs are caught, the less impact and expense they cause. Code review is a lot more fun before the changes are live than retroactively trying to figure out what change broke everything in production.

I remember code reviews at my first software company. Someone must have heard they were a good idea, so we had to do code reviews of new features. We waited until the feature was done, then printed out all the code and took a few engineers into a room. We'd sit there for a few hours looking over the printouts before making a few token suggestions and calling it quits. We shared some small insights and caught a few bugs, but overall this heavy process was unstructured, late, disorganized and ineffective. We had the right motivations, but we were looking at code too late in too big of a chunk.

At the other end of the scale is the ad-hoc system of emailing around some diffs or code refs and asking for input. Here the lack of formal process is a pain -- emailing diffs around? Another process flowing through (stalling in) my mailbox? Where do I send my comments, how do I archive the results?

Somewhere in the happy middle are tools for "lightweight code review." These tools take a diff and present it in a web interface, providing the ability to view the diffs, make comments and enforce some sort of workflow. Gerrit (inspired by Rietveld, itself inspired by Mondrian), Review Board and BarKeep are some of the open source options; github reviews are free, and paid software is available from SmartBear (Code Collaborator) and Atlassian (Crucible), among others. These systems all make different trade-offs: pre-vs-post commit, forced-vs-optional reviews, VCS agnostic-vs-integrated, inline-vs-side-by-side diffs.

At work we've been using Gerrit for two years now, after switching from Rietveld when we migrated from SVN to git. Gerrit is very opinionated: it is for mandatory, pre-commit reviews and only supports git. Gerrit integrates nicely with the Jenkins continuous integration server to run unit tests before the review. We originally picked Gerrit at [undisclosed startup] and managed to integrate it into Demand after we were acquired.

For open source projects, I'm happy with github pull-request discussions (I still wish I could see side-by-side diffs! I have this huge monitor for a reason) and couldn't see enforcing the gerrit model (even though the android project does) without discouraging drive-by patches from random developers. But at work, I want the small dollop of process that gerrit provides.

I've been super happy with gerrit and can't wait to tell you more about it in "Code Review, Part II. Gerrit FTW".

Tuesday, May 14, 2013

Mojo::UserAgent dom parsing is FUN!

I'm about to roll out a new feature at work. I've added new data to the "schema" behind some of our pages and another team has implemented the template changes.

Now, how do I test that the feature appears on the page? And by "on the page" I mean embedded as attributes in a javascript call on the page.

I used Mojo::UserAgent and its built-in DOM handling to make this easy-peasy! Load the page, look for script tags, find the one calling our Magic.Marker function and then use a regex to pull the args. Wrap it all up in Test::Most and throw some data into __DATA__!

ps. Writing this post took considerably longer than writing this test.

#!/usr/bin/perl
use v5.12;

use Mojo::UserAgent;
use List::Util qw(first);
use Test::Most;

my $ua = Mojo::UserAgent->new();
sub x_param_from_url
{
    # load URL and find the first script block that 
    # contains Magic.Marker.  Parse Magic.Marker args 
    # for items like "{ x: value }" and return all the 
    # values found.
    my $url     = shift;
    my @scripts = $ua->get($url)->res->dom->find('script')->each();
    my $script  = first { $_->all_text =~ /Magic\.Marker/ } @scripts;
    return unless $script;
    
    my $text = $script->all_text;
    my @matches = ( $text =~ m/\{ \s* x \s* : \s* (\S+) \s* \}/gimx );
    return @matches;
}

foreach my $data (<DATA>)
{
    chomp $data;
    my ( $url, @expected ) = split( /\s+/, $data );
    # redirect to the internal-staging server
    $url =~ s/www.example.com/internal-staging.example.com/;
    my @output = x_param_from_url($url);
    eq_or_diff( \@output, \@expected, "$url" );
}
done_testing;

__DATA__
www.example.com/how-to_123  '2' '3'
www.example.com/why-not-1234 '1' '2'
www.example.com/why-not-777 '3'
This produces lovely TAP output for the site:
% ./verify.pl

not ok 1 - internal-staging.example.com/how-to_123
#   Failed test 'internal-staging.example.com/how-to_123'
#   at ./verify.pl line 35.
# +----+---------+----+----------+
# | Elt|Got      | Elt|Expected  |
# +----+---------+----+----------+
# |   0|'\'2\''  |   0|'\'2\''   |
# |    |         *   1|'\'3\''   *
# +----+---------+----+----------+
ok 2 - internal-staging.example.com/why-not-1234
ok 3 - internal-staging.example.com/why-not-777

Monday, January 21, 2013

git branch cleanup -- show commits that need to merge

I'm cleaning up my feature branches. I want to look at any dangling commits that only exist in the branches.

First Pass: Remove branches that have already been merged into master

These branches can be detected via git branch --merged; pipe the list to git branch -d to delete them.
% git branch|wc -l
93

% git branch --merged|wc -l
24

# not quite right, want to skip current branch
% git branch --no-color --merged | grep -v '\*' | xargs -n 1 git branch -d 
...
Deleted branch deleteme (was 4358c15).

% git branch --merged
* master

% git branch|wc -l
70

Second Pass: Find dangling commits

Look at the dangling commits in the remaining branches and see if they are important. If we want the commits, we'll merge them into master. If not, we'll force delete the branch with git branch -D.

Use git merge-base to find the most recent ancestor between this branch and master.

# pick a branch
% git checkout makefile_dirs
% git merge-base --all HEAD master
9906334e464c6e93103b786672b14c31c27f8df8

# The trailing ".." is important, as it specifies a range of commits
% git log --pretty=oneline 9906334e464c6e93103b786672b14c31c27f8df8..

f48d27a93239558d5737652bc0e397d99d0f43fc improves directory creation in makefile

# We can merge those last two steps into one:
% git log --pretty=oneline $(git merge-base --all HEAD master)..

f48d27a93239558d5737652bc0e397d99d0f43fc improves directory creation in makefile
Including the prior commit in the log helps show how old this branch is. We'll add a ^ to start the range at the parent of the merge-base commit, so the last shared commit shows up too.
% git log --pretty=oneline $(git merge-base --all HEAD master)^..

f48d27a93239558d5737652bc0e397d99d0f43fc improves directory creation in makefile
9906334e464c6e93103b786672b14c31c27f8df8 passes site_id and rad_id through to ou
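To survey all the remaining branches at once instead of checking each one out, a loop like this sketch prints every branch with its unmerged commits:

% for b in $(git branch --no-color | grep -v '\*'); do
      echo "== $b"
      git log --pretty=oneline $(git merge-base master $b)..$b
  done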