Sunday, August 29, 2021

LDAP: peering behind the curtain.

LDAP is mysterious and opaque to me. Deep dark magic, etc.

Summary

Jump down to see perl and ldapsearch examples for querying an LDAP server, as well as example search filters.

Long rambling background journey

I've tried a few times over the years to poke around -- notably making that Dancer example to authenticate via ldap. But I've never had, you know, authz and approval to futz with an LDAP. Even figuring out the appropriate search and base thingamajig is unclear. This is not an inviting protocol.

I needed to poke around a client LDAP install -- maintained 12 timezones away. I looked at ldapsearch for a few minutes before realizing it didn't encapsulate any of that complexity. But we'll get back to that.

Next I turned to perl, specifically the CPAN pages for Net::LDAP and Net::LDAPS. This simplified a little of the interface and packaged up the results. Removed just enough complexity for me to get started.

Aside: I love, love, love, the SYNOPSIS section of perl documentation. Please, steal that idea for your language and docs!!! Synopsis gives a code snippet showing usage, and it's probably the usage that brought you to the module in the first place. At least enough of a skeleton to hang your code upon.

To use LDAP: one connects to a server, binds with credentials to authenticate if needed, and issues a query. Net::LDAP keeps connect and bind as separate steps. But really the code is the easy bit. Server, user, password, query. What do these even look like?

You'll see examples that skip authentication. Why? Understanding authentication requires understanding the schema for your LDAP data. So we don't just say, "log me in as user foo," no we get to say "log me in as 'uid=foo,ou=Users,o=org,dc=...'". Yikes!

I'm connecting to a managed LDAP service, so they provide some of these settings. Specifically I am connecting to jumpcloud.com with an organization id. This gives me a base of "ou=Users,o=$org_id,dc=jumpcloud,dc=com". LDAP uses a concept of base_dn that we can use to talk about relative locations of the ldap data. For any query I make to this LDAP server, I'll need this base_dn as a reference. "ou" => organizational unit. "o" => organization. "dn" => distinguished name. "dc" => domain component, spelling out the service's domain. See, already we are in service-specific territory.

Perl from the command line is a fun way to test and experiment. "The shell is our repl" or something. I defined a bunch of env vars (oh c'mon, I used hard-coded strings while poking), and this got me connected and poking and able to determine what I wanted to use for a filter value.

A filter of "(uid=*)" will return all users (every record with a uid). The parens are important and part of the query filter. My actual use case involved a more complicated query and custom extraction and reporting, so I appreciated having the data available programmatically.

My initial problem was finding users that did not have a display name set. This was supposed to be an invariant in the upstream data. The app we were configuring used the displayName to greet the user and failed spectacularly when it was not present.

Filtering this seems simple enough: "show me the users that have a blank displayName". But "has a blank displayName" is not a valid idea for a filter. We can instead ask for all records that do not have a displayName set, using the unary boolean not ! => (!(displayName=*)). Also, we only want user records.
Voila: (&(uid=*)(!(displayName=*))).

Perl one liner

 
       % export LDAP_USER=ldap_username
       % export LDAP_PASS=t00manys3cr3ts
       % export LDAP_SERVER=ldap.jumpcloud.com
       % export LDAP_BASEDN=ou=Users,o=org_id_string,dc=jumpcloud,dc=com
       % perl -MNet::LDAPS -M5.20.0 -E 'my $ldap = Net::LDAPS->new($ENV{LDAP_SERVER}) or die "$@";
           my $dn = $ENV{LDAP_BASEDN};
           # bind with the fully qualified user DN; die on failure
           my $msg = $ldap->bind("uid=$ENV{LDAP_USER},$dn", password => $ENV{LDAP_PASS});
           $msg->code and die $msg->error;
           my $filter = "(uid=*)";
           my $srch = $ldap->search(base => $dn, filter => $filter, attrs => []);
           for my $e ($srch->entries) { $e->dump }'

     
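Dumping whole records is great for exploring; for the custom extraction I mentioned, each entry's attributes can be pulled individually. A minimal sketch using Net::LDAP's entry objects (the '(none)' fallback is my own convention):

    for my $e ($srch->entries) {
        # get_value returns the first value of an attribute, or undef when unset
        printf "%s => %s\n", $e->get_value('uid'),
            $e->get_value('displayName') // '(none)';
    }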

Filters:

Filter syntax has its own RFC 4515. Key takeaways: it's rather lispy:
  1. parens are always required
  2. operator comes before operands (prefix notation, aka polish notation).

(objectClass=*)      #  a default filter to return all records.
(uid=*)              #  all records with a uid field.
(sn=Smith)           #  all records with last name of "Smith" (probably only user records)
(&(uid=*)(sn=Smith)) #  all users with a last name of "Smith"
(&(sn=Smith)(memberOf=cn=Agents,${LDAP_BASEDN})) #  all Agent Smiths
    - note that the group name is also fully qualified with the base dn.
    

(&(!(displayname=*))(uid=*))  #  users without a display name set

ldapsearch

Now that I have a working query, can I make the same query using ldapsearch? Again, once we figure out the login incantation, it's smooth sailing.

    % export LDAP_USER=ldap_username
    % export LDAP_PASS=t00manys3cr3ts
    % export LDAP_SERVER=ldap.jumpcloud.com
    % export LDAP_BASEDN=ou=Users,o=org_id_string,dc=jumpcloud,dc=com
        
    # show full record for $LDAP_USER
    #   -w binds the password.  -W would query for password
    #   -D defines the "bind DN", the fully qualified user for authentication
    #   -b defines the "base DN", sets base for filter values, does not affect bind DN
    #      setting LDAP_BASEDN environment var did *not* have any effect upon the base dn.
    #   the final argument is the filter; matches are found under the base DN.
    
    % ldapsearch -h $LDAP_SERVER -w"$LDAP_PASS" -D"uid=${LDAP_USER},${LDAP_BASEDN}" -b"$LDAP_BASEDN" "(uid=${LDAP_USER})"

    # get the gidNumber of the login user:
    #    -LL selects a shorter LDIF format for record output
    #    return only the "gidNumber" field

    % ldapsearch -h $LDAP_SERVER -w"$LDAP_PASS" -D"uid=${LDAP_USER},${LDAP_BASEDN}" -b"$LDAP_BASEDN" -LL "(uid=${LDAP_USER})" gidNumber

    version: 1

    dn: uid=ldap_username,ou=Users,o=org_id,dc=jumpcloud,dc=com
    gidNumber: 5418
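
For completeness, the query that kicked off this whole post, users without a displayName, should look like this in ldapsearch (same flags as above; consider it a sketch, my real run went through perl):

    % ldapsearch -h $LDAP_SERVER -w"$LDAP_PASS" -D"uid=${LDAP_USER},${LDAP_BASEDN}" -b"$LDAP_BASEDN" -LL "(&(uid=*)(!(displayName=*)))" uid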
Future me, you're welcome.

Friday, July 2, 2021

We're baaaaaack!

Lowlevelmanager.com domain is back online!

Godaddy allowed a ring of foreign IPs to access my account and modify domains.

Most actions would have triggered warning emails. Modifying DNS records does not. Nor are changes to DNS logged. Their actions did not trigger any warnings or any fraud detection at all. Who even allows admin traffic from Russia, China, or Brazil here in the 2020s?

I only found out when I googled and found zero results for the site. A view on the Wayback Machine shows a redirect to some sort of scammer referral site. Apologies if any of my lovely readers ended up there. Then Godaddy moved to parking the domain. They did not inform me, nor share any generated ad revenue.

On the bright side, this kicked me to get my last couple of domains off of Godaddy.

Just say No. Godaddy is an unethical and sexist entity that should be avoided, in my opinion.

Tuesday, June 16, 2020

Profiling vim start up time

After updating my vim plugins and adding a few new ones, I found my start up time became unreasonably slow. I blamed my tag-generating plugin, because the slowdown only occurred when opening vim with a file. That was a red herring: removing it didn't speed things up. Checking that project's issues, I found it had already been switched to run asynchronously.

While investigating, I found a reference to the --startuptime option in vim. This option creates a log file containing entries for each startup action: sourcing files, executing plugins, etc., with timing information showing start and end times for each action, measured relative to the start and to the previous command. It's a lot of useful information for investigating which plugins are slowing startup, reported in a useful format. Thumbs up to vim!

% vim --startuptime /tmp/startup-time.log
% vim --startuptime /tmp/startup-time-vimrc.log ~/.vimrc

Wow, loading all these new plugins was taking a while. 800ms in the first case but 3200ms when loading a file! More files are sourced in the latter case as the FileType files need to be sourced and applied.
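
Since the second column of the log is the per-action time, a quick reverse numeric sort surfaces the worst offenders. A rough trick rather than an official interface (the column mixes "self" and "self+sourced" times):

% sort -k2 -rn /tmp/startup-time-vimrc.log | head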

Timing

Starting time without a file load

471.134  121.649: VimEnter autocommands
471.137  000.003: before starting main loop
477.200  004.424  004.424: sourcing /home/andrew/.vim/bundle/vim-fugitive/autoload/fugitive.vim
481.049  002.378  002.378: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar.vim
481.569  000.173  000.173: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/debug.vim
851.768  000.774  000.774: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/types/uctags.vim
852.226  000.177  000.177: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/prototypes/typeinfo.vim
861.475  382.412: first screen update
861.476  000.001: --- VIM STARTED ---

Starting time loading .vimrc

585.708  181.604: VimEnter autocommands
585.715  000.007: before starting main loop
595.217  000.623  000.623: sourcing /home/andrew/.vim/bundle/vim-airline/autoload/airline/async.vim
606.194  006.914  006.914: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar.vim
606.877  000.122  000.122: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/debug.vim
3057.382  001.216  001.216: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/types/uctags.vim
3057.895  000.191  000.191: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/prototypes/typeinfo.vim
3065.474  000.258  000.258: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/prototypes/fileinfo.vim
3242.091  000.213  000.213: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/state.vim
3249.747  2655.495: first screen update
3249.750  000.003: --- VIM STARTED ---

The big difference: missing time between loading tagbar/debug and tagbar/types/uctags.vim, 370ms vs 2400ms. The files are getting sourced quickly, but something is causing an external delay.

So maybe it is the tags generating plugin locking the tags file? But a tags file neither exists nor is generated in the directory containing .vimrc. Very curious indeed.

The debug file (tagbar/debug) defines several function! functions, and has one minor branch on if has('reltime'). Nothing suspicious at all.
Similarly, the uctags file (tagbar/types/uctags.vim) creates a single function, function! tagbar#types#uctags#init(supported_types) abort, which doesn't seem to be affected by the currently loaded file.
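
Another angle I could have tried: vim's built-in profiler, which times individual functions and sourced files rather than just the startup events:

% vim --cmd 'profile start /tmp/vim-profile.log' --cmd 'profile func *' --cmd 'profile file *' ~/.vimrc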

Another slow run loading .vimrc (tagbar still enabled):

480.725  101.452: VimEnter autocommands
480.732  000.007: before starting main loop
490.939  000.468  000.468: sourcing /home/andrew/.vim/bundle/vim-airline/autoload/airline/async.vim
498.378  003.465  003.465: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar.vim
499.003  000.131  000.131: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/debug.vim
2947.025  000.461  000.461: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/types/uctags.vim
2947.325  000.041  000.041: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/prototypes/typeinfo.vim
2954.640  000.096  000.096: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/prototypes/fileinfo.vim
3179.663  000.203  000.203: sourcing /home/andrew/.vim/bundle/tagbar/autoload/tagbar/state.vim
3188.567  2703.970: first screen update
3188.572  000.005: --- VIM STARTED ---

Starting time loading .vimrc with tagbar disabled:

330.198  006.165: BufEnter autocommands
330.201  000.003: editing files in windows
330.967  000.694  000.694: sourcing /home/andrew/.vim/bundle/YouCompleteMe/autoload/youcompleteme.vim
442.792  111.897: VimEnter autocommands
442.797  000.005: before starting main loop
449.255  000.333  000.333: sourcing /home/andrew/.vim/bundle/vim-airline/autoload/airline/async.vim
462.352  019.222: first screen update
462.358  000.006: --- VIM STARTED ---

Summary

Disabling the tagbar plugin gets me back my missing 2.5 seconds.

Tagbar has Issue #477 open for this delay during startup: it runs ctags synchronously under the hood on startup.

Further debugging

The syntastic and YouCompleteMe plugins each add ~200ms to startup time. Removing all three lowers my start time to 180ms, which feels fast, as there is no noticeable lag after the first paint of the screen.

I really enjoy the benefits of live linting with syntastic. Moving to ALE, an asynchronous linter, speeds up startup considerably, with a ~30ms load time. I'll switch for a while and see how it compares.

Monday, June 1, 2020

Updating syntax highlighting imports to CDN (alexgorbatchev)

TL;DR:

replace:
https://alexgorbatchev.com/pub/sh/current/
with:
https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83

Syntax highlighting


For many years I have used a javascript-based syntax highlighter on this blog, including it directly from the author Alex Gorbatchev's site, alexgorbatchev.com. Lately his site has been timing out behind Cloudflare (522) and slowing my initial render.

I've fixed the links to point to the wonderful javascript mirror Cloudflare provides at https://cdnjs.cloudflare.com/ajax/libs. This should fix the loading issue. I also took this opportunity to move to https loading for the embed, to fix any mixed-content warnings. Apologies to any patron who was inconvenienced.

The Cloudflare page for this package is https://cdnjs.com/libraries/SyntaxHighlighter, which shows all of the available files within the package.

A three-year-old answer on this lowly-rated Stack Overflow question provided details on the new link location.

An expansive view of back-end engineering

Originally written for an introduction email/cover letter, this ran well over blog length. I moved it here and aggressively edited the email. The email is still way too long. I get loquacious when I'm passionate and excited.
Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.
-- Blaise Pascal

I didn’t have time to write you a short letter, so I wrote you a long one.
-- Mark Twain

I, Engineer:

I am an excellent back-end Sr. Engineer experienced in a range of start-up sizes and domains. I enjoy digging into problems of scale and expanding a working system for the next 10x.

Back-end engineering requires systemic thinking and knowledge across a wide spectrum. These range from system design, to production-facing development, to monitoring and metrics for support, to data extraction and analysis, to ops. I understand the big picture and also love to dig into the technical details and implementation. I build on my background as an Electrical Engineer and am happy to discuss anything in the stack down to the metal ... oxide in the gates of the chips.

I am comfortable and extensively experienced with all the pieces of a well run system:

  • Micro and macro service development, including analysis of effectiveness and efficiency. A polyglot, most recently using Python.
  • Extracting loosely coupled services from a monolith.
  • Metrics, monitoring and alerts to track and improve the health of the system.
  • RDBMS schema design with table and query optimization.
  • Trade-offs in design and implementation of nosql cache and secondary stores.
  • Code lifetime tracking and shared ownership and support.
  • Extracting and exporting data, reporting, building a pipeline into reporting stores from OLAP to data lakes.
  • Ops from provisioning raw metal to EC2 to docker images and containers.

I care deeply about removing accidental complexity and the resulting friction: improving overall velocity and trust in the system through investments in tests, logs, monitoring and metrics; instituting shared code ownership through testing, reviews, documentation and knowledge transfer, with clear paths and branching across development, staging, QA, and production; building a culture of continuous learning and exploration, where we all improve.

I was asked last week, "What is my dev super power?" In the moment, I said, "modesty," which is quite a pivot from the required interview energy of "I AM SO GREAT!" to humility and empathy. I listen and learn. I am kind and effusive when people come to me with questions; we work together to find answers, and I show how to seek further answers. I want to help everyone around me so we can all do more. I earn respect from my peers both from technical excellence and deep domain knowledge and from consistency of character, respect, and care. I invite you to reach out to my friends and coworkers to hear their opinions of my personality and character; ask for an anecdote of working together formally or informally.

There are so many "I" statements here. I am more interested in "we." I join early stage start-ups and small companies where I have the leverage to influence and expand "we." I excel at intra-team and inter-team communication, at finding and filling the cracks in the path from customer need to customer success. Building "we" in dev means realizing it is "we" not "they" with product, qa and ops, management, executives and the customer. Camaraderie is built upon mentoring, training and learning, shared purpose and unified direction. "We" also includes sharing with the community outside our walls. This is why I run and attend meet-ups, speak at conferences, and volunteer with open source. This is a responsibility and obligation as a senior member of the community: to reach back and help more people climb into our club.

I have the underlying fundamental skills in linux, git, apache/nginx, wsgi, postgres, mongo, elastic, memcache, redis, riak, rabbitmq and kafka, and a wide array of amazon: EC2, S3, RDS, EMR, dynamodb, sqs/sns/kinesis, redshift, athena, etc.

A pleasure to make your acquaintance.

Peace,

Andrew

blog: lowlevelmanager.com splash: rockstär.dev

Friday, May 1, 2020

Rust playground from your own gist

The rust playground is awesome.

It allows playing with rust in the browser, without needing to install anything locally. These playgrounds / playpens are popular with newer languages. I've definitely seen them for rust, go, kotlin, and others. Javascript ones allow script and css and other elements as well.

When editing a playground, the file can be modified, compiled, and run. The file can be exported as a direct link with the code (if it is short enough to fit in a get param). The code is also stored as a github gist under the "rust-play" user, and export links are provided to view the gist and to load a permalink to the playpen with that gist.

I haven't seen this documented anywhere, but we can use our own gist in the link!

Permalink looks like this:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=23727a722bff54fd20f44ef43b96466b

version, mode and edition options configure the settings for the playground.
The gist option specifies the gist to load.

Advantages to using your own gist:
  • Ownership and control
  • Notification of comments
  • Tracking of forks
  • Updates of gist are reflected in the playground
Disadvantages:
  • Edits made in the playground are lost.  The playground doesn't have update access to the gist.
  • Unexpected experience for consumers?

Try out my Playground link for gist 23727a722bff54fd20f44ef43b96466b

Embedding a single file from a github gist

Embedding a github gist


Github has improved their gists by directly providing an embed script.

This is pretty handy. From the sharing dropdown on the gist, choose embed and then copy the script tag. Paste that directly into your html (say on your super minimal blogging platform, like blogger/blogspot).

The script will embed the gist with syntax highlighting and links to view the raw gist.

Format of the script tag:
<script src="https://gist.github.com/${username}/${gist_id}.js"></script>

username is optional and can be skipped:
<script src="https://gist.github.com/${gist_id}.js"></script>


Embedding a single file from a gist


When our gist contains multiple files, the default embed will include all of the files (sorted by filename, just like in the gist). You can embed just a single file by adding the filename in a file GET param.

Format of the script tag:
<script src="https://gist.github.com/${username}/${gist_id}.js?file=${filename}"></script>

The username path can be skipped:

<script src="https://gist.github.com/${gist_id}.js?file=${filename}"></script>

Embedded file example


Embedding from file is_valid_sequence.rs from my gist 23727a722bff54fd20f44ef43b96466b

<script src="https://gist.github.com/23727a722bff54fd20f44ef43b96466b.js?file=is_valid_sequence.rs"></script> 

toy interview problems in rust

2019 was my "Year of Rust".  I'd hoped to blog about that, yet here we are. :)

Recently I've been doing random problems at random "coding-interview" sites. This is a bit like working musical scales: practice flexing the low-level muscle memory to let it fade into the background and allow thinking at a higher level.

I mostly do them in python to work out the kinks in my algorithm (seriously, they are all about fixating on annoying edge cases), and then a few I work out in rust.

Problem


Today's challenge: Check If a String Is a Valid Sequence from Root to Leaves Path in a Binary Tree on leetcode.com

Check If a String Is a Valid Sequence from Root to Leaves Path in a Binary Tree

Given a binary tree where each path going from the root to any leaf forms a valid sequence, check whether a given string is a valid sequence in that binary tree.
The "string" is given as an array of integers arr; it is a valid sequence if concatenating the node values along some root-to-leaf path produces exactly arr.



Solution



A few things to note about my solution:
  • The recursion is pretty simple and makes lovely use of the match syntax (a rough sketch follows this list)
  • rust match syntax doesn't allow matching a slice into (head, tail) as one might be used to in other pattern-match languages or a lisp, where all arrays are built up from recursively defined two-element lists, e.g. (a, (b, (c, (d, nil))))
  • During development, I worked out all the edge-case logic with the recursive calls commented out and short-circuited. After getting that all working, I dug into the type issues with my initial naive attempt at executing the recursion.
  • extracting a value from an Option<Rc<RefCell<T>>> requires jumping through a couple of hoops.
  • My internal recursive function has a slightly different signature than the public function. In my python solution, I was able to use the primary interface in my recursion code. Passing literal vecs as borrowed from the RefCell was just never going to work.
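
Here is that sketch: a hedged reconstruction of the shape of the solution, not the gist verbatim (the helper name walk is mine, and I'm assuming leetcode's usual TreeNode definition). It sidesteps the (head, tail) pattern limitation with split_first:

use std::cell::RefCell;
use std::rc::Rc;

// leetcode's usual binary tree node
pub struct TreeNode {
    pub val: i32,
    pub left: Option<Rc<RefCell<TreeNode>>>,
    pub right: Option<Rc<RefCell<TreeNode>>>,
}

pub fn is_valid_sequence(root: Option<Rc<RefCell<TreeNode>>>, arr: Vec<i32>) -> bool {
    walk(&root, &arr)
}

// internal helper: borrows instead of owning, so the recursion can pass subtrees
fn walk(node: &Option<Rc<RefCell<TreeNode>>>, rest: &[i32]) -> bool {
    match (node, rest.split_first()) {
        (None, _) => false,       // ran off the tree
        (Some(_), None) => false, // sequence ended above a node
        (Some(n), Some((head, tail))) => {
            let n = n.borrow();
            n.val == *head
                && if tail.is_empty() {
                    // sequence ends here: valid only if this node is a leaf
                    n.left.is_none() && n.right.is_none()
                } else {
                    walk(&n.left, tail) || walk(&n.right, tail)
                }
        }
    }
}

fn main() {
    // single-node tree [1] with sequence [1] => true
    let root = Some(Rc::new(RefCell::new(TreeNode { val: 1, left: None, right: None })));
    assert!(is_valid_sequence(root, vec![1]));
}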

Tests

Now let's add some tests!
The leetcode interface handles converting tests from a text form, populating the tree, and then running the algorithm. Details of their population routine are not available. Adding tests manually is ... annoying. Which makes it a good practice exercise for building Rc trees by hand.

Rust Playpen

Full code available to edit and play with on the rust playpen

Tuesday, November 5, 2019

Update tmux default directory for new windows

TIL:
  • the -c flag to attach-session will update the default current working directory for new windows.
  • attach-session can be run from the tmux command prompt with an arg of -t . to connect to the current session,
    so we don't need to detach and reattach.
  • attach also supports -c
  • tmux makes the current pane's working directory available in the command prompt as '#{pane_current_path}'
  • new-window also supports the -c option.
Set the default CWD for new windows to the current directory:

From the tmux command prompt:

:attach -c '#{pane_current_path}'

# or the more verbose attach-session version,
# listed because I thought of this first :)
:attach-session -t . -c '#{pane_current_path}'


Bound to a char in .tmux.conf:

# update default path for new windows to the current path 
bind-key 'M-/' attach -c '#{pane_current_path}'

# Open a new window using the current working directory,
bind-key 'C' new-window -c '#{pane_current_path}'

For reference: see this StackOverflow comment and this Unix StackExchange answer.

Saturday, February 9, 2019

Hacktoberfest 2018

For Hacktoberfest 2018, I set a goal of completing both the primary contest and at least one side quest. I succeeded!

Hacktoberfest is a fun "contest" sponsored by DigitalOcean and Github each October to encourage open-source contributions. If you make enough pull-requests after signing up, then you win a t-shirt and some stickers. This year the primary goal was 5 pull-requests on any public github repositories. I recommend it as a fun way to get in the practice of working on open source software and pushing pull-requests. I'll remind you to sign up next October.

As my side quest, I completed the Microsoft challenge -- one pull request on any of their repos plus filling out a form. Today I found their email in my inbox with the redemption code; it arrived almost two months ago. So now my shirt is on the way (albeit not in my preferred size, as XL was the only one out of stock)! Similarly, it took a month for me to notice the confirmation for the primary contest and confirm my shirt. Note to self: clean up your email!

Thanks to Kivanc, we even had an office hacktoberfest hack-night.

See my PRs from October, 2018.

Hi Blog!

It's been a dark few years here on the "darkest timeline." I'm going to let my light shine -- I'm hopeful that I'll write some fun tech content in the near future detailing my current adventures.

I have a full docket of books and videos I'm reading and watching on Safari. It triggers thoughts about how to amalgamate the knowledge into a lecture/presentation. Assuming I keep up this newly recovered energy and excitement, you'll see more here soon.

peace!
AG

Friday, September 16, 2016

But these little setbacks are sometimes just what we need to take a giant step forward. Right, Kent?

Wednesday, May 25, 2016

maintaining SSH_AUTH_SOCK in tmux

Tmux has a neat feature where certain environment variables can be updated during attachment.

SSH_AUTH_SOCK is included by default. When you reconnect from within a new ssh environment, SSH_AUTH_SOCK is updated inside of tmux. Any new windows created in the tmux session will have the updated ssh information.

Already running shells are not updated (how would tmux tell the shell to update?). I've added a zsh function to my workflow to pull in the updated value from a running shell. It wraps the show-environment tmux command. This is a shell function because it needs to affect the running shell.

fix_ssh () {
        eval $(tmux show-environment | grep ^SSH_AUTH_SOCK)
}

I don't normally start many interactive ssh sessions from my remote box, but I do need to talk to my local SSH agent to connect to my private git repos. I hate running an update and seeing Permission denied. Before I added this fix_ssh command, I caught myself opening new tmux windows to run the fetch and then closing them to return to my active shell -- an expensive and distracting work-around.

% git fetch
Permission denied (publickey).
fatal: Could not read from remote repository.
% fix_ssh
% git fetch
# success!

When I run with screen, I set the SSH_AUTH_SOCK value to a steady value before opening the initial screen session and then manually update the sock on each login. When I remember.

# for screen
MY_SSH_AUTH_SOCK=$HOME/.ssh/auth_sock
rm -f $MY_SSH_AUTH_SOCK && ln -s $SSH_AUTH_SOCK $MY_SSH_AUTH_SOCK && export SSH_AUTH_SOCK=$MY_SSH_AUTH_SOCK
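
A possible refinement I haven't battle-tested: wrap that in a shell function for the login profile, so "when I remember" becomes "every login" (ln -sf collapses the rm && ln dance):

fix_screen_sock () {
    local stable=$HOME/.ssh/auth_sock
    # point the stable symlink at the real agent socket, then use it
    if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$stable" ]; then
        ln -sf "$SSH_AUTH_SOCK" "$stable"
    fi
    export SSH_AUTH_SOCK=$stable
}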

Tuesday, February 9, 2016

Interview with an adware author.
It was funny. It really showed me the power of gradualism. It’s hard to get people to do something bad all in one big jump, but if you can cut it up into small enough pieces, you can get people to do almost anything.

http://philosecurity.org/2009/01/12/interview-with-an-adware-author

Sunday, February 7, 2016

how-to: run git interactive rebase non-interactively

TL;DR

git alias to autosquash fixup commits non-interactively:
git config --global alias.fixup '!GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash'

non-interactive interactive rebase

In the normal workflow, git interactive rebase presents the user with a document to edit interactively to modify and reorder commits. What if you want to run it non-interactively?

Yes, you can do this. And yes I have a use-case!

I'd like to apply/squash all my --fixup commits automatically without spending time in that editor screen. This is an easy use case because I don't change anything in the editor window, that's handled by the --autosquash flag to rebase.

-i
--interactive
Make a list of the commits which are about to be rebased. Let the user edit that list before rebasing. This mode can also be used to split commits (see SPLITTING COMMITS below).

The commit list format can be changed by setting the configuration option rebase.instructionFormat. A customized instruction format will automatically have the long commit hash prepended to the format.
-- git rebase documentation

Git respects the traditional unix environment variables $EDITOR and $VISUAL. Overriding one of those will change the editor that is run during interactive rebase, but it also changes the editor used within the rebase for editing commit messages, etc.

A third environment variable was added by Peter Oberndorfer: $GIT_SEQUENCE_EDITOR. This editor is only used for the interactive rebase edit. As an aside, this is a wonderful commit message.

"rebase -i": support special-purpose editor to edit insn sheet

The insn sheet used by "rebase -i" is designed to be easily editable by any text editor, but an editor that is specifically meant for it (but is otherwise unsuitable for editing regular text files) could be useful by allowing drag & drop reordering in a GUI environment, for example.

The GIT_SEQUENCE_EDITOR environment variable and/or the sequence.editor configuration variable can be used to specify such an editor, while allowing the usual editor to be used to edit commit log messages. As usual, the environment variable takes precedence over the configuration variable.

It is envisioned that other "sequencer" based tools will use the same mechanism.

Signed-off-by: Peter Oberndorfer
Signed-off-by: Junio C Hamano

-- http://git.kernel.org/cgit/git/git.git/commit/?id=821881d88d3012a64a52ece9a8c2571ca00c35cd

Did I just know all this? No, not really. I hadn't heard of GIT_SEQUENCE_EDITOR until reading the code for the silly little git --blame-someone-else script going around. That gave me the keyword to search to find this excellent Stack Overflow answer.

Autosquash

For my usage, I just need an editor that completes successfully without modifying the input. Luckily I have one of those, a bunch really, but let's go with the simplest: true. Yep, this will run an autosquash interactive rebase without showing me the pick window, where $COMMIT_SHA is the reference for the rebase.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash $COMMIT_SHA

By defining the environment variable at the start of the command, it is only stored in the environment for that command.

I've now stored this as a git alias to test out. I'll let you know how it goes.
git config --global alias.fixup '!GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash'

Examples

git fixup master
rebase the current branch to HEAD of master and autosquash the commits.
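
For context, a full round trip might look like this (abc1234 is a stand-in sha):

% git commit --fixup=abc1234   # record a fix destined for an earlier commit
% git fixup master             # autosquash every fixup! commit since master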

rust: to_string() vs to_owned() for string literals

Always use to_owned() to convert a string literal.

I found this lovely explanation of to_string() vs to_owned() for rust. Only use to_string() for other types that can convert to string.

You should always be using to_owned(). to_string() is the generic conversion to a String from any type implementing the ToString trait. It uses the formatting functions and therefor might end up doing multiple allocations and running much more code than a simple to_owned() which just allocates a buffer and copies the literal into the buffer.
-- https://users.rust-lang.org/t/to-string-vs-to-owned-for-string-literals/1441
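
In code, the two are interchangeable at the call site; the difference is purely which trait does the work (a minimal illustration):

fn main() {
    // both yield an owned String from a &str literal
    let a: String = "hello".to_owned();  // ToOwned: allocate a buffer, copy the bytes
    let b: String = "hello".to_string(); // ToString via Display: the formatting machinery
    assert_eq!(a, b);
}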

With the caveat that this may be fixed in the future to optimize to_string() on String literals.

This may be fixed in the future with specialization, as str could implement ToString directly instead of having it go through the generic impl ToString for T where T: Display {} implementation, which employs the formatting framework. But currently I do concur with your recommendation.
-- DroidLogician

Sunday, January 24, 2016

SCALE14x

I'm presenting at SCALE 14x on Sunday January 24, 2016.

Fix the Website: a devops success story (details)

Here are my slides!

  1. Original Keynote file
  2. PDF
  3. PDF with presenter info

Wednesday, July 8, 2015

Effective Git messages and history inspection

Embedded below is my presentation from YAPC::NA 2015 on Effective Git: better commits via inspecting history and code archeology.

I showed the elements of an effective commit message, why they're useful during inspection of the code, and how to coerce your rough draft feature branch into a production ready artifact.

The slides in the video are washed out, so follow along with the Slides (pdf)

From the talk description:

Harness the power of Version Control to view a project’s evolution over time. We have the luxury of moving forward and backwards through the history of our projects, viewing changes through time and reading sign posts along the journey. Experience reading commit messages will prove how useful they are at sharing the mental model behind the code. Reading historical commit messages and viewing diffs improves our ability to document and stage our own commits. Commits are not write-only! They are messages from the past that tell us about our present.

I’ll show you the tools I use for diving into a new code base and how I interact with my current projects on a daily basis. I’ll show how I answer the questions that come up when reading and debugging code. I’ll show you how I stage and rebase my commits to make a readable history. You’re keystrokes away from pivoting from code to annotation to arbitrary diffs, then cross-correlating commit messages with your ticketing system.

Wednesday, April 22, 2015

Renew expiring GeoTrust HTTPS/SSL certificate in Amazon AWS for S3 and CloudFront

Key Insight

AWS doesn't let you modify the key for server-certificates, forcing you to create new ones and then update CloudFront (CF) and Elastic Load Balancer (ELB) configurations to use the new cert.

My corporate https/ssl certificate is expiring. I need to renew it and get it pushed to AWS IAM for use in S3 and CloudFront. If you're in the same boat, I hope these instructions help you out.

PS. Hi Future me, I'll see you in about a year when this round of certs expires.

Materials Needed:

  1. CSR and private key file.
    1. The current set is preferred.
    2. If you don't have the original files, you can create a new pair.
    3. If you are changing the CSR, your certificate authority may need to spend time re-validating you.
  2. account & password to your certificate authority.
  3. aws credentials and access to modify IAM certificates
  4. aws command line tools installed.

Basic Steps:

  1. Renew the certificate:
    1. Connect to certificate authority.  For me this is GeoTrust
    2. Click the big [renew]  button by your current certificate.  
      1. pick the new certificate term,  
      2. confirm admin and billing contacts
      3. update the CSR for confirmation
      4. pay.
      5. wait for confirmation
  2. Download and prep the certificate files:
    1. Download the certificate bundle.  Choose type "other" which will provide a zipped bundle of files. Unzip and enter the directory. The bundle contains:
      crossRootCA.cer
      getting_started.txt
      IntermediateCA.cer
      ssl_certificate.cer
    2. Create a certificate bundle from the root and intermediate files:
      cat IntermediateCA.cer crossRootCA.cer > geotrust-chain.pem
    3. Copy the original secure key to the local dir.  For me this is company.rsa.key.  This must be an RSA key in x509 format.
      cp secret_files/company.rsa.key ./
  3. Create a new AWS IAM server-certificate.
    1. AWS doesn't support modifying the keyfile in existing server-certificates, so we need to create new ones.
    2. CloudFront requires a separate server-certificate with a path starting with '/cloudfront/', so we'll upload the key twice to create two server-certificates:
    3. aws iam upload-server-certificate \
      --server-certificate-name company-test \
      --certificate-body file://ssl_certificate.cer \
      --private-key file://company.rsa.key \
      --certificate-chain file://geotrust-chain.pem \
      --path /
      aws iam upload-server-certificate \
      --server-certificate-name company-test-cf \
      --certificate-body file://ssl_certificate.cer \
      --private-key file://company.rsa.key \
      --certificate-chain file://geotrust-chain.pem \
      --path /cloudfront/
  4. Update AWS to use the new server-certificates
    1. Cloudfront:
      1. For each CloudFront distribution using the expiring server-certificate: 
        1. In the console: Console -> CloudFront -> Distribution Name -> [General] -> [Edit] 
        2. Then choose the new certificate from the drop-down.
    2. ELB:
      1. Console -> EC2 -> (pick region) -> Load Balancers
      2. For each load balancer that uses HTTPS with the old cert:
        1. right-click -> 'edit listeners'
        2. Use the "change" link in the SSL Certificate column.
          1. Certificate Type: Choose an existing certificate
          2. Certificate Name: choose the new certificate from the drop-down

Today I learned about and used the aws iam *-server-certificate* commands. Next steps would be bypassing the console and automating detection and updates of ELB and CF entries.
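
In the meantime, aws iam list-server-certificates confirms both uploads landed:

% aws iam list-server-certificates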


Sunday, February 8, 2015

haskell on centos 6.5

Use justhub rather than the version in the epel repo.

Don't bother with the version of haskell-platform in the epel repo. It is sufficiently out-of-date (circa 2010) that it can't even update cabal via cabal install cabal-install. Jump straight to using justhub.

Justhub example for centos 6.x:

# install the justhub yum repo:
sudo rpm -ivh http://sherkin.justhub.org/el6/RPMS/x86_64/justhub-release-2.0-4.0.el6.x86_64.rpm

# install a single current haskell version into /usr/bin
sudo yum install haskell

# update cabal
cabal update

# e.g. install some packages via cabal
cabal install haskell-src-exts

Now I can get back to coding for exercism.io. Come review my first haskell program.