Sunday, February 27, 2011

Scale9x: Take Advantage of Modern Perl

Chromatic's talk on Modern Perl at Scale9x is in about an hour -- 11:30am, Sun Feb 27, 2011. If you can't make it, at least check out the live stream.

I really shouldn't have gone to SCALE yesterday, since I'm so sick and it wiped me out. Yet here I am contemplating going again today. I do want to get my copy of Modern Perl autographed, after all.

Perl's recent renaissance has produced amazing tools that you too can use today.

This talk explains the philosophy of language design apparent in Perl 5 along the two fundamental axes of the language: lexical scoping and pervasive value and amount contexts. It also discusses several important pragmas and language extensions to improve Perl 5's defaults, to reduce the chance of errors, to allow better abstractions, and to encourage the writing of great code.

Speaker: chromatic
-- http://www.socallinuxexpo.org/scale9x/presentations/take-advantage-modern-perl

Friday, February 25, 2011

Mining of Massive Datasets textbook.

I started reading Mining of Massive Datasets on vacation. I didn't get very far into it, as it isn't exactly light beach reading. The first bit is a review covering things I mostly don't know, so that was a fun start. I now have a better feeling for IDF and TF.IDF, for instance.
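As a toy refresher for myself, here is tf.idf in one shell snippet. The three-"document" corpus and the numbers are made up by me, not from the book -- just enough to see tf (term frequency in one document) multiplied by idf (log of total documents over documents containing the term):

```shell
# toy numbers, not from the book: tf.idf = (term freq in a doc) * log(N / df)
docs=("the cat sat" "the dog ran" "the cat ran")    # three tiny "documents"
term="cat"
N=${#docs[@]}                                       # total number of documents: 3

df=0                                                # document frequency of $term
for d in "${docs[@]}"; do
    case " $d " in *" $term "*) df=$((df+1)) ;; esac
done

# term frequency of 'cat' in document 0 ("the cat sat")
tf=$(printf '%s\n' ${docs[0]} | grep -cx "$term")

result=$(awk -v tf="$tf" -v n="$N" -v df="$df" \
    'BEGIN { printf "tf.idf = %.4f\n", tf * log(n/df) }')
echo "$result"
```

Rare terms (low df) score higher; a term in every document scores log(1) = 0, which is the whole point.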

Infolab seems down at the moment.

Mining of Massive Datasets.
http://infolab.stanford.edu/~ullman/mmds.html

new business opportunities

[This business opportunity] is a wide open space with lots of people jumping into the pool without knowing how to swim.
We should be able to make a mint selling life preservers.
--me

Thursday, February 10, 2011

LA Hadoop

Great attendance at the Los Angeles Hadoop Users Group (LA HUG) meetup last night on "Productizing Hadoop." Cloudera provided a great speaker to discuss the dos and don'ts of migrating Hadoop from play/development to full enterprise mode (from hunter-gatherer to modern city). The Hadoop infrastructure has come a long way since my first LA Hadoop meetup over a year ago -- better support for multi-tenancy with authentication and authorization, more tools built on top of Hadoop, and less need to roll your own scripts for everything.

Props to Shopzilla for hosting.

This was a much shyer crowd than we see at LA Perl Mongers (LA.PM). Only one other person asked a question at the end. At PM, we tend to pepper the whole presentation with questions and feedback, making everything a group production.

Cpanm 1.1 -- now with mirror support!

There is a new version of cpanm (App-cpanminus) that supports --mirror and --mirror-only to allow offline usage.

Kick ass! Thanks again, miyagawa.

cpanm 1.1 has shipped, and with the `--mirror-only` option, you can use it with your local minicpan mirror, or your own company's CPAN index (aka DarkPAN).

The only thing keeping a few experienced Perl programmers who love cpanm from using it offline or at work was the lack of proper mirror index querying support.

cpanm has always required an internet connection to resolve module names and dependencies, relying on CPAN Meta DB and search.cpan.org to query the package index.

That's been a fair requirement for 95% of usage, but for an experienced hacker who spends most of their airplane time hacking code on a laptop, offline support that falls back to a local minicpan would be really nice. (Even though many airlines nowadays provide in-flight Wi-Fi :))

So a while ago I opened a bug to support a `--mirror-only` option to bypass these internet queries and parse the mirror's own 02packages.details.txt.gz file for module resolution, and a couple of people tried implementing it in their own branches. (Thank you!)

Today I merged one of those implementations and improved it a little to make it run even faster and more network-efficient. Using it is really simple -- just run cpanm with options such as:

cpanm --mirror ~/minicpan --mirror-only Plack

and it will use your minicpan local mirror as the only place to resolve module names and download tarballs from. (TIP: you can alias this like `minicpanm` to save typing)

---- http://bulknews.typepad.com/blog/2010/11/cpanm-11-hearts-minicpan-and-darkpan.html
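The alias tip from the quote above could look something like this in practice. A sketch for a ~/.bashrc -- I've used a shell function rather than an alias so it also works in scripts, and the ~/minicpan path is just my assumed mirror location:

```shell
# hypothetical ~/.bashrc helper wrapping the cpanm invocation shown above;
# ~/minicpan is wherever your local minicpan mirror lives
minicpanm() {
    cpanm --mirror ~/minicpan --mirror-only "$@"
}

# usage: minicpanm Plack
```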

Sunday, January 23, 2011

Recover iPhone contacts from a raw backup

Just as I got my new phone (T-Mobile myTouch 4G -- love it!), my 23-month-old iPhone completely refused to charge from either the wall or the computer. So how do I get my contacts off?

I have a full mirror of my iPhone (3G) filesystem, created using rsync+ssh from within my jailbroken phone. Backing up over wifi is way cooler than through a tethered cable -- and I had no other choice, as the data connector died after 14 months. I could still charge from a wall adapter, just not from any USB host. This crippled setup worked long enough for me to escape my AT&T contract.

Useful tidbits:

  1. contacts are stored in AddressBook.sqlitedb
  2. file is stored in /private/var/mobile/Library/AddressBook/AddressBook.sqlitedb
  3. There is a second, bare-schema database in /private/var/root/Library/AddressBook/
  4. The database is in sqlite3 format.
  5. Person entries are stored in ABPerson table
  6. Phone number/email/etc entries are stored in ABMultiValue table
We can open this file in sqlite3 and export it to a usable comma-separated file without any other external tools. The person entries are stored in ABPerson, and the phone number entries in ABMultiValue; we join the two tables in our CSV output.

The following snippet copies the database to /tmp, opens it in sqlite3, and exports to contacts.csv:

# copy the db to /tmp, then open it
cp /private/var/mobile/Library/AddressBook/AddressBook.sqlitedb /tmp
cd /tmp
sqlite3 AddressBook.sqlitedb
sqlite> .mode csv
sqlite> .output contacts.csv
sqlite> select ROWID, first, last, identifier, value, record_id
   ...>   from ABPerson p join ABMultiValue mv on (ROWID = record_id);
sqlite> .quit

The file locations and names were surprisingly hard to find. On the bright side, I didn't need to decode any plist files.

There are more interesting fields in ABPerson and ABMultiValue; feel free to update the select to grab more of them.

sqlite> .tables
ABGroup                    ABPersonMultiValueDeletes
ABGroupChanges             ABPersonSearchKey
ABGroupMembers             ABPhoneLastFour
ABMultiValue               ABRecent
ABMultiValueEntry          ABStore
ABMultiValueEntryKey       FirstSortSectionCount
ABMultiValueLabel          LastSortSectionCount
ABPerson                   _SqliteDatabaseProperties
ABPersonChanges

sqlite> .schema ABPerson
CREATE TABLE ABPerson (ROWID INTEGER PRIMARY KEY AUTOINCREMENT,
    First TEXT, Last TEXT, Middle TEXT,
    FirstPhonetic TEXT, MiddlePhonetic TEXT, LastPhonetic TEXT,
    Organization TEXT, Department TEXT, Note TEXT, Kind INTEGER,
    Birthday TEXT, JobTitle TEXT, Nickname TEXT, Prefix TEXT, Suffix TEXT,
    FirstSort TEXT, LastSort TEXT,
    CreationDate INTEGER, ModificationDate INTEGER,
    CompositeNameFallback TEXT, ExternalIdentifier TEXT, StoreID INTEGER,
    DisplayName TEXT, ExternalRepresentation BLOB,
    FirstSortSection TEXT, LastSortSection TEXT,
    FirstSortLanguageIndex INTEGER DEFAULT 2147483647,
    LastSortLanguageIndex INTEGER DEFAULT 2147483647);

sqlite> .schema ABMultiValue
CREATE TABLE ABMultiValue (UID INTEGER PRIMARY KEY, record_id INTEGER,
    property INTEGER, identifier INTEGER, label INTEGER, value TEXT);

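For scripted exports, the same join can also run non-interactively from the shell. A sketch: the CREATE/INSERT step builds a stand-in database from the (abridged) schema above with made-up Ada Lovelace rows, just so the snippet runs anywhere -- with a real AddressBook.sqlitedb copy in /tmp, that step is skipped:

```shell
DB=/tmp/AddressBook.sqlitedb

# build a stand-in database (made-up rows) only if no real backup copy exists
[ -f "$DB" ] || sqlite3 "$DB" "
  CREATE TABLE ABPerson (ROWID INTEGER PRIMARY KEY AUTOINCREMENT,
                         First TEXT, Last TEXT, Organization TEXT);
  CREATE TABLE ABMultiValue (UID INTEGER PRIMARY KEY, record_id INTEGER,
                             property INTEGER, identifier INTEGER,
                             label INTEGER, value TEXT);
  INSERT INTO ABPerson (First, Last, Organization)
         VALUES ('Ada', 'Lovelace', 'Analytical Engines Ltd');
  INSERT INTO ABMultiValue (record_id, property, value)
         VALUES (1, 3, '555-1212');"

# export the join straight to CSV, grabbing a couple of extra ABPerson fields
sqlite3 -csv "$DB" \
  "SELECT p.ROWID, p.First, p.Last, p.Organization, mv.value
     FROM ABPerson p JOIN ABMultiValue mv ON p.ROWID = mv.record_id;" \
  > /tmp/contacts.csv
```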
"DBIx::Class::DeploymentHandler is awesome" is an article on using DBIx::Class::DeploymentHandler (along with SQL::Translator) to automatically produce database version upgrade and downgrade scripts and schema layout diagrams from DBIx::Class schema definitions.

Awesome. This is why I follow the Perl Iron Man blogging feed. Great stuff in there!

Day one with R, Head First Data Analysis

Awesome. I installed R (r-project) about 10 minutes ago, and I just created my first scatterplot! This is a long way from my days with p-fit and n-fit.

I'm reading Head First Data Analysis, published by the fine folks at O'Reilly. I'm enjoying reading this Head First book. Going in, I always think the asides, cartoons and irreverent colloquial manner will be off-putting, but it really does flow nicely. I look forward to comparing it to my other new O'Reilly book, Data Analysis with Open Source Tools (released in Nov 2010).

On page 291, we see this "Ready Bake Code" to pull a CSV from their website, load it into R, and print a scatter plot of a subset of the data.

employees <- read.csv("http://www.headfirstlabs.com/books/hfda/hfda_ch10_employees.csv",
                      header=TRUE)
head(employees, n=30)
plot(employees$requested[employees$negotiated==TRUE],
     employees$received[employees$negotiated==TRUE])

Boom, I have a scatter plot of the subset of employees where the NEGOTIATED field is TRUE, comparing the requested to the received.

I did a full install onto my Ubuntu laptop by adding the official r-project apt repository, which gave me a slightly newer version than what was available in the default Ubuntu 10.10 (Maverick) repositories. CRAN asks you to manually pick a mirror; I chose my local UCLA mirror.
# Create /etc/apt/sources.list.d/r.list
deb http://cran.stat.ucla.edu/bin/linux/ubuntu maverick/
# add key (optional,but preferred)
gpg --keyserver subkeys.pgp.net --recv-key E2A11821
gpg -a --export E2A11821 | sudo apt-key add -
# update aptitude
sudo aptitude update
# install r
sudo aptitude install r-base
# launch R (not 'r' -- that's a shell built-in)
R

Tuesday, January 11, 2011

LA Tech Events -- getting busy again.

After the hibernation month of December, it seems like tech events are popping out of the woodwork here in January!

Tonight (2011-01-11) is CloudCamp LA, an un-conference on all things "Cloud." It is hosted at MorphLabs in El Segundo. More than 200 people are pre-registered! All the in-person tickets are gone, but there are still 30 registrations to watch a streamed video from home. (as of 11:11am on 1/11/11)

Tonight also features a round of lightning talks at ScaleLA: Los Angeles High Scalability Meetup (formerly the Hadoop Meetup), hosted by the wonderful folks at Mahalo in Santa Monica.

Tomorrow is the Thousand Oaks Perl Mongers. TO is a bit of a drive from down here, but I try to make it every couple of months to visit my ValueClick peeps. Not gonna happen this month, though.

Friday is TEDxCaltech: Feynman's Vision (50 Years Later). Tilly and I will be volunteering. I'm still unsure if I can make the volunteer dinner on Thursday night, where volunteers mingle with presenters.

Next Tuesday, Stephen Hawking returns for a presentation at Caltech. One wonders how much longer he'll be out in public. Caltech alumni can register for a ticket lottery (deadline noon on 1/13); everyone else can show up and wait in line.

Wednesday the 19th brings back Los Angeles Perl Mongers; it feels like forever since our November meeting. January finds us visiting our friends at Rent.com -- thanks for hosting! My presentation is still TBD, but I hope to have the directions and presentations squared away this week.

Thursday the 20th is another wonderful Mindshare, back in the comfortable digs of The Independent theater downtown. Now with complimentary pre-event Happy Hour! Their schedule is also TBD, good to know I'm not alone on that front.

Just over the horizon, February brings SCALE -- the Southern California Linux Expo, Feb 25-27, 2011. Make your plans now!

L.A. Nerdnite also took off the month of December, as our beloved venue, the Air Conditioned Supper Club, closed or took on new management. Look for an announcement soon of a new hip venue. Who is up for a Hollywood BYOB picnic experience?

SCALE presentation proposals : denied.

Sigh. Neither of my Modern Perl SCALE proposals was accepted -- dev track proposals for hands-on demonstrations of using Hadoop Streaming with big data and of quickly building web applications with Dancer. I hope we get a Perl Mongers booth/table.

I'm glad to hear there were so many presentation proposals. Sounds like we'll have some great talks!

Dear Speaker,

The SCALE committee has reviewed your proposal(s). Unfortunately, your proposal, while excellent, was not accepted. SCALE again had many high quality submissions, so we could only accept a small fraction of those submitted (47 out of 160 submissions).

We thank you for your interest in SCALE and we appreciate your submittal! We hope you'll participate in future SCALE events. The latest updates for the conference are available at http://www.socallinuxexpo.org

Monday, January 3, 2011

New Year, New Releases

I opened my CPAN mail today and received a lovely email from a user of one of my CPAN modules, Hadoop::Streaming. Reading a nice comment was a wonderful way to start the first Monday of the New Year. Included with the praise was a bug report -- doubleplusgood!

You have absolutely no idea (or perhaps you do) how happy I was to see that there is a hadoop streaming module for perl. So I thank you for making this available! I wonder if you are still working on it or have plans to continue working on it? Are there many users to your knowledge? Finally, I've tried to run the example code myself under perl 5.12.2 and receive bareword errors when running the mapper locally.

---- Frank S Fejes III

Looking at the package and the error output in his email, I realized that my hastily pushed-out synopsis example had never been code reviewed -- it wouldn't compile, as I wasn't quoting the arguments to the Moose keyword 'with'.

It was a small matter to fix and a breeze to push to CPAN via the magic of Dist::Zilla. Thanks, Ricardo!

  1. clone code from github repo : git clone git@github.com:spazm/hadoop-streaming-frontend.git
  2. edit lib/Hadoop/Streaming.pm to fix the Synopsis pod
  3. add comment to Changes file
  4. commit the change locally and back to github: git commit && git push origin master
  5. magic Dist::Zilla command, dzil release, which took care of:
    1. checking for modified files not checked in to git
    2. running pod weaver,
    3. running tests,
    4. updating the release version in Changes file,
    5. git commit Changes,
    6. git push origin master,
    7. git branch,
    8. tar+gz release,
    9. push release to CPAN

While checking my CPAN mail, I also found a CPANTS fail notice for Net::HTTP::Factual, which is built on Net::HTTP::Spore. Spore v0.3 came out and changed the spec format again; v0.2 was in turn different from v0.1.

I tweaked my Factual .spec to work with Spore v0.2 or v0.3 and pushed it up to CPAN. Same magic Dist::Zilla command.

Freshly available on CPAN:

Wednesday, December 22, 2010

Included in the End-of-Year message from Caltech President Chameau is this quote from Einstein, delivered in December 1930, on his first visit to Caltech.

"To you all, my American friends, I wish a happy 1931. You are well entitled to look confidently to the future, because you harmoniously combine the joy of life, the joy of work, and a carefree, enterprising spirit which pervades your very being, and seems to make the day's work like blessed play to a child."
---- Albert Einstein, December 1930

May the spirit of joy pervade your life and your work in the coming year! Let us all look confidently and move boldly into the future. Happy Solstice! Merry Christmas! Happy New Year!

Sunday, December 19, 2010

ORM -- database abstractions

I've just learned of DBIx::Class::CDBICompat, a Class::DBI compatibility layer for DBIx::Class. Awesome.

DESCRIPTION

DBIx::Class features a fully featured compatibility layer with Class::DBI and some common plugins to ease transition for existing CDBI users.

This is not a wrapper or subclass of DBIx::Class but rather a series of plugins. The result being that even though you're using the Class::DBI emulation layer you are still getting DBIx::Class objects. You can use all DBIx::Class features and methods via CDBICompat. This allows you to take advantage of DBIx::Class features without having to rewrite your CDBI code.

I have inherited an app based on Class::DBI, but I'm not terribly familiar with Class::DBI. So far, the SQL snippet approach is annoying. There are lurking bugs (in our code, not Class::DBI), but mocking and fixing with CDBI is proving to be a pain. It's good to know that if I rewrite the DB layer in DBIx::Class over the holidays, I'll be able to shim some of the old code onto it through a compatibility layer.

Related topics:

DBIx::Class vs Class::DBI vs Rose::DB::Object vs Fey::ORM by fRew
http://blog.afoolishmanifesto.com/archives/822
PerlMonks discussion
Class::DBI vs DBIx::Class vs Rose::DB::Object
http://www.perlmonks.org/?node_id=700283
DBIx::DataModel
http://search.cpan.org/perldoc?DBIx::DataModel
"DBIxDM is very interesting and Laurent Dami helps the DBIC team maintain SQL::Abstract – but he also doesn’t manage to market it to save his life so very few people have heard of it." -- Matt Trout
Rose::DB -- hand coded to be fastest of the 4?
Marlon suggested looking more deeply into the DB benchmarks.
"You really should run the benchmarks of Rose::DB vs DBIC. Your database layer is usually your slowest in your entire application, so that’s important. RDBO is also the ORM we use at CBSSPORTS.com because speed is important at this level. I’ll be interested in looking at what the new revision of DBIC with Moose does in that arena. On a side note, I agree that the generative queries are excellent on DBIC."

_Marlon_

2006 discussion of Rose vs DBIx, by the Rose Author.
http://osdir.com/ml/lang.perl.modules.dbi.rose-db-object/2006-06/msg00021.html

Perl 2011: Where are we now?

Piers Cawley wrote an excellent forward-looking piece, The Perl Future, in January 2009. As we approach the two-year mark, how have we fared? He talks about Perl 6; Perl 5.10.0, aka "perl5 version 10"; Perl enlightenment and the rise of Moose; and "on frameworks and the future."

Where are we now?

Perl 5.12 is out, on schedule -- two years of work representing 750,000 lines of changes over 3,000 files from 200 authors. Deprecated features of Perl 4 and Perl 5 are finally marked as such. More Unicode improvements. Fixes for the Y2038 bug (is epoch time 64-bit now?). Pluggable keywords and syntax. A new release schedule means stable releases come out in the spring, followed by a .1 fix release, then monthly releases (on the 20th) for new bug fixes.

Perl 6 released a real, honest-to-goodness release candidate. Rakudo Star, "a usable Perl 6 release," came out in June, aimed at "Perl 6 early adopters." Rakudo Star has seen monthly updates, most recently Rakudo Star 2010.11, released in November 2010. Rakudo Perl is a specific implementation of the Perl 6 language; the Rakudo Star 2010.11 release includes "release #35 of the Rakudo Perl 6 compiler, version 2.10.1 of the Parrot Virtual Machine, and various modules, documentation, and other resources collected from the Perl 6 community."

A year and a half of the Perl Iron Man blogging project has seen a flurry of posts from nearly 250 Perl bloggers! We've seen advocacy, snippets, whining, and community. I've seen a lot more Japanese-language Perl posts -- folks happy to use Perl, Python, and Ruby and pull the best from each.

I now find it strange and unsettling to meet self-proclaimed Perl programmers who don't use Moose. If you haven't played with it (and it does feel like playing -- it's liberatingly fun), go do so now. I'll wait.

I don't know about you, but I just switched from one startup using perl to another startup using perl. Awesome perl folks are hard to find, they're mostly already busy doing work they love. Why are we using perl? -- because perl works, it scales with developer time, and perl is beautiful.

Piers mentioned frameworks -- yes, individual frameworks are important, but the vast armada of options available on CPAN as a whole provides an immense multiplier on developer productivity. It's so massive it is easy to overlook -- doesn't everyone have a massive, distributed, user-written body of code with excellent testing methodology available at the touch of a button?

Merry Christmas to all!

[...snip...]
However, if you look at the good parts (O'Reilly haven't announced "Perl: The Good Parts", but it's a book that's crying out to be written), there's a really nice language in there. Arguably there's at least two. There's the language of the one-liner, the quick throwaway program written to achieve some sysadmin related task, and there's the more 'refined' language you use when you're writing something that is going to end up being maintained.

I think it's this split personality that can put people off the language. They see the line noise of the one liner school of programming, the games of Code Golf (originally called Perl golf, the idea spread), the obfuscated Perl contests, the terrible code that got written by cowboys and people who didn't know any better in the dotcom bubble (you can achieve a surprising amount with terrible Perl code, but you will hit the wall when you try and change it) and they think that's all there is.

But there is another Perl. It's a language that runs The Internet Movie Database, Slashdot, Booking.com, Vox.com, LiveJournal and HiveMinder. It's a language which enables people to write and maintain massive code-bases over years, supporting developers with excellent testing and documentation. It's a language you should be considering for your next project. It's also something of a blue sky research project - at least, that's how some people see Perl 6.

----http://www.h-online.com/open/features/Healthcheck-Perl-The-Perl-Future-746527.html

Wednesday, December 15, 2010

A story of one man's journey to Vim Nirvana (Vimvana). For those of you stuck on Monday, keep trying -- you'll make it.
I was watching a violinist bow intensely and I had this thought: I probably have as many brain cells devoted to my text editor as he does to playing his chosen instrument. Is it outlandish to imagine that an MRI of his brain during a difficult solo wouldn’t look much different than mine while manipulating code in vim?

Consider, if you will, the following montage from one vimmer’s journey.

---- http://kevinw.github.com/2010/12/15/this-is-your-brain-on-vim/

Tuesday, December 14, 2010

Perl Advent Calendars

It's Advent Calendar time in the perl ecosystem! Start each day with a delicious treat of knowledge.

I've found a half dozen English-language Perl advent calendars, starting with the original Perl Advent Calendar. For extra fun I've included another half dozen Japanese-language calendars -- I can still read the Perl; it's just the prose that gets lost in translation.

Perl Mongers Perl Advent calendar
http://perladvent.pm.org/2010/
Catalyst Advent Calendar -- The Catalyst Web Framework
http://www.catalystframework.org/calendar/
Perl Dancer -- the Dancer mini web framework
http://advent.perldancer.org/2010
Ricardo's 2010 advent calendar -- a month of RJBS
http://advent.rjbs.manxome.org/2010/
UWE's advent calendar - a cpan module every day.
http://www.perl-uwe.com/
Perl 6
http://perl6advent.wordpress.com/
Last Year's Plack calendar
http://advent.plackperl.org/
For the adventurous: Japanese Perl Advent Calendars, 8 different tracks!
http://perl-users.jp/articles/advent-calendar/2010/
Hacker Track
Casual Track
English Track
Acme Track
Win32 Track
Meta Advent Calendar Track
Symbolic Programming Track
perl 6

One bonus list, for the sysadmin in your life:

SysAdvent - The Sysadmin Advent Calendar.
http://sysadvent.blogspot.com/

Monday, December 13, 2010

Vimana : cpan module to automate vim plugin installation

VIMANA! The Vim script manager: a CPAN module for downloading and installing Vim plugins! It works with .vim files, archive files (zip, rar), and vimball formats. By c9s / cornelius / Yo-An Lin. Caveat: the "installed" command only recognizes plugins installed via Vimana.

Cornelius's "perl hacks on vim" presentation has been on SlideShare for two years. It covers "why you should improve your editor skills" -- ranging from "stop moving around with the arrow keys" to advanced commands and plugins. He's written quite a few Vim plugins and quite a lot of CPAN modules. Props!

Vimana Example:

% cpan Vimana
% vimana search nerd
nerd-tree-&-ack     [utility]      Adding search capability to NERD_Tree with ack
the-nerd-tree       [utility]      A tree explorer plugin for navigating the filesystem
nerd-tree-project   [utility]      It tries to find out root project directory, browse project file with NERD_tree.
the-nerd-commenter  [utility]      A plugin that allows for easy commenting of code for many filetypes.
findinnerdtree      [utility]      Expands NERDTree to file in current buffer

% vimana info the-nerd-tree
#... shows the install instructions ...

% vimana install the-nerd-tree
Plugin will be installed to runtime path: /home/andrew/.vim
Package the-nerd-tree is not installed.
Downloading plugin
.
 - Makefile : Check if makefile exists. ...not ok
 - Meta : Check if 'META' or 'VIMMETA' file exists. (support for VIM::Packager) ...not ok
 - Rakefile : Check if rakefile exists. ...not ok
Package doesn't contain META,VIMMETA,VIMMETA.yml or Makefile file
Copying files...
/tmp/yohBObI3iy/ => /home/andrew/.vim
Updating helptags
Done

There are quite a few plugins mentioned in the presentation; I've listed the ones I'm interested in below and will be installing and reviewing them soon. Slide 120 begins a nice section on advanced movement keys. Slide 135 has a list of the variables controlling the Perl syntax highlighter.

Perl folding variables:

" see :help folding for more information
:set foldmethod=syntax              " enable syntax based folding
let perl_include_pod=1              " fold POD documentation.
let perl_extended_vars=1            " for complex things like @{${"foo"}}
let perl_want_scope_in_variables=1  " for something like $pack::var1
let perl_fold=1                     " enable perl language based folding
let perl_fold_blocks=1              " enable folding based on {} blocks
---- slide 135

Vim Plugins:

Exciting Vim plugins to check out from vim.org. Reviews and usage information to come in future posts. I'm excited to try Cornelius's updated omni-completion helpers.

perlprove.vim
http://www.vim.org/scripts/script.php?script_id=1319
How does this compare with efm_perl.pl, included with vim?
DBExt.vim
http://www.vim.org/scripts/script.php?script_id=356
Run database queries from inside Vim. I skipped past this the first time around, but now I see that it will let you copy a query directly from your source language, apply that language's string mechanics to get the output string, and then prompt for bound variables. Interesting!
FuzzyFinder
http://www.vim.org/scripts/script.php?script_id=1984
Allows a handy shorthand mapping to search for files/buffers/etc.
the NERD tree
http://www.vim.org/scripts/script.php?script_id=1658
updated file listing explorer
You're already using this, right?
the NERD commenter
http://www.vim.org/scripts/script.php?script_id=1218
improved commenting, under current development.
taglist
http://www.vim.org/scripts/script.php?script_id=273
a ctag integration that shows tag information for the current file / source code browser
"The most downloaded and highest rated plugin on vim.org"
BufExplorer
http://www.vim.org/scripts/script.php?script_id=42
Buffer Explorer / Browser -- easily switch between buffers.
Git-Vim
https://github.com/motemen/git-vim
Git commands within vim.
HyperGit
http://www.vim.org/scripts/script.php?script_id=2954
a git plugin for vim ( with a git tree menu like NERDtree ), by c9s.
screenshot
Updates to vim omnicompletion for perl:
http://www.vim.org/scripts/script.php?script_id=2852
Demonstration video
https://github.com/c9s/perlomni.vim
AutoComplPop
http://www.vim.org/scripts/script.php?script_id=1879
Automatically opens popup menu for completions

TEDxCaltech -- Friday, January 14, 2011

"Feynman's Vision -- The next 50 years"

TEDx, the independent TED event series, is coming to Caltech in January. TEDx events are inspired by TED and use the same plans and speaking formats. I'm surprised I haven't heard more buzz about this event. I wasn't able to get into TEDxUSC, the first of the TEDx events. I'll be volunteering and hope to see you there!

Will we see coverage of Feynman's vision of the "race to the bottom" -- his challenge to his engineering and scientist peers to work together and compete to see how small we can go in nanotech? Will anyone from Professor Tai's micromachine lab be speaking? I'm sure we'll see interesting wetware (bio/CS/engineering) research, given the Institute's focus on bio over the past decade.

You won't know until you go! (Or until you check the speakers tab)

Feynman's Vision -- The next 50 years.

TEDxCaltech is a community of people who are passionate about sharing "Feynman’s Vision: The Next 50 Years." If that sounds like something you want to be a part of, complete and submit the application below. Due to limited venue space, we cannot approve all applicants instantaneously. If you are approved, you will receive an email shortly either inviting you to register for the event, or letting you know that you are on the waiting list. The registration fee is $25 for Caltech students; $65 for Caltech faculty, staff, postdocs, alumni and JPL; and $85 for all others. The all-inclusive day will begin with breakfast and will be punctuated with generous breaks for food and conversation with fellow attendees. It promises to be an exciting and entertaining intellectual adventure—a time to unplug from the day-to-day routine. You won’t want to miss a minute!

Registration opens at 8:00 am, doors open at 9:30 am, talks conclude at 6:00 pm and will be followed by a reception. Parking is free.
-- http://www.tedxcaltech.com/apply

Thursday, December 9, 2010

perl, tags, and vim : effective code browsing.

Adding a tags file to vim makes for an effective code browser for perl.

I've just started a new job, so I have a large new repository of Perl code and modules to familiarize myself with. I've taken this as an opportunity to refresh my tag-fu in vim. After creating a tag file with exuberant ctags, I can now jump around my whole Perl repo from within my vim session.

The -t command-line flag to vim opens files by tag (module) name, e.g. vim -t My::Module::Name. Within a vim session, I jump to the definition of a function by hitting ctrl-] with the cursor over a use of the function, even if that definition is in another file! Ctrl-t and I'm back where I started.

Today I found the --extra=q option to ctags, which adds fully qualified tags for package methods, e.g. My::Package::method_1. This helps with long package names and the ctrl-] key. FTW!
I have this set of ctag flags aliased as "ctagit":


ctags -f tags --recurse --totals \
--exclude=blib --exclude=.svn \
--exclude=.git --exclude='*~' \
--extra=q \
--languages=Perl \
--langmap=Perl:+.t
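That invocation could equally live in ~/.bashrc as a small shell function -- a sketch of one way to write the "ctagit" wrapper, with any extra arguments passed through to ctags:

```shell
# wrap the exuberant-ctags invocation above; extra args pass through
ctagit() {
    ctags -f tags --recurse --totals \
        --exclude=blib --exclude=.svn \
        --exclude=.git --exclude='*~' \
        --extra=q \
        --languages=Perl \
        --langmap=Perl:+.t \
        "$@"
}
```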

In my .vimrc file, I define the tags search path as ./tags,tags,~/code/tags; this looks for a tags file first in the directory of the loaded file, then in the current directory, and then in a hardcoded path in my code area.

" [ .vimrc file ]
" set tag search path: directory of current file, current working directory, hard path
set tags=./tags,tags,~/code/tags

More info on using tags in vim is available in :help tags. I've found the following commands useful.

ctrl-]           jump to the tag under the cursor
visual ctrl-]    jump to tag from visual mode
:tag tagname     jump to tag tagname
:stag tagname    split the screen and open tagname
:tags            show the tag stack; '>' marks the current tag
ctrl-t           jump to [count] older entry in the tag stack
:[count]pop      jump to [count] older entry in the tag stack
:[count]tag      jump to [count] newer entry in the tag stack

Update:
To configure vim to treat ':' (colon) as part of the keyword, so it matches Long::Module::Sub::Module package names, add it to the iskeyword setting. I keep multiple Perl filetype hooks stored in files under .vim/ftplugin/perl/. These filetype hooks are enabled with the filetype plugin on directive in my main .vimrc file.

" [ .vimrc file]
"enable loading the ftplugin directories
filetype plugin on
" [ .vim/ftplugin/perl/keyword_for_perl file]
" Append : to the list of keyword chars to allow completion on Module::Names
set iskeyword+=:

The same effect could be conjured directly in the .vimrc file via autocmd:

" append colon(:) to the iskeyword list for perl files, to enable Module::Name completion.
autocmd FileType perl set iskeyword+=:

My configuration files are available in the spazm/config git repository on GitHub.

Monday, November 29, 2010

NoNaNoWriMo


Seems I have just said "no" to blogging in November. I have not been off writing the great American novel.
This drops me back to paperman status in the Perl Iron Man blog competition. Time to start back up?