Friday, May 1, 2020

Rust playground from your own gist

The rust playground is awesome.

It allows playing with rust in the browser without installing anything locally. These playgrounds / playpens are popular with newer languages; I've seen them for rust, go, kotlin, and others. The javascript ones allow css and other elements alongside the script.

When editing a playground, the file can be modified, compiled, and run. The code can be exported as a direct link (if it is short enough to fit in a GET param). It is also stored as a github gist under the "rust-play" user, and export links are provided to view the gist and to load a playground permalink for that gist.

I haven't seen this documented anywhere, but we can use our own gist in the link!

Permalink looks like this:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=23727a722bff54fd20f44ef43b96466b

The version, mode, and edition options configure the playground settings.
The gist option specifies the gist to load.

Advantages to using your own gist:
  • Ownership and control
  • Notification of comments
  • Tracking of forks
  • Updates of gist are reflected in the playground
Disadvantages:
  • Edits made in the playground are lost.  The playground doesn't have update access to the gist.
  • Unexpected experience for consumers?

Try out my Playground link for gist 23727a722bff54fd20f44ef43b96466b

Embedding a single file from a github gist

Embedding a github gist


Github has improved their gists by directly providing an embed script.

This is pretty handy. From the sharing dropdown on the gist, choose embed and then copy the script tag. Paste that directly into your html (say on your super minimal blogging platform, like blogger/blogspot).

The script will embed the gist with syntax highlighting and links to view the raw gist.

Format of the script tag:
<script src="https://gist.github.com/${username}/${gist_id}.js"></script>

username is optional and can be skipped:
<script src="https://gist.github.com/${gist_id}.js"></script>


Embedding a single file from a gist


When our gist contains multiple files, the default embed includes all of the files, sorted by filename just as in the gist. You can embed a single file by passing its filename in a file GET param.

Format of the script tag:
<script src="https://gist.github.com/${username}/${gist_id}.js?file=${filename}"></script>

The username path can be skipped:

<script src="https://gist.github.com/${gist_id}.js?file=${filename}"></script>

Embedded file example


Embedding the file is_valid_sequence.rs from my gist 23727a722bff54fd20f44ef43b96466b:

<script src="https://gist.github.com/23727a722bff54fd20f44ef43b96466b.js?file=is_valid_sequence.rs"></script> 

toy interview problems in rust

2019 was my "Year of Rust".  I'd hoped to blog about that, yet here we are. :)
 
Recently I've been doing random problems at random "coding-interview" sites. This is a bit like practicing musical scales: exercising the low-level muscle memory until it fades into the background and frees up thinking at a higher level.

I mostly do them in python to work out the kinks in my algorithm (seriously, they are all about fixating on annoying edge cases), and then a few I work out in rust.

Problem


Today's challenge: Check If a String Is a Valid Sequence from Root to Leaves Path in a Binary Tree on leetcode.com

Given a binary tree where each path from the root to a leaf forms a sequence, check whether a given string is a valid sequence in that tree. The string is given as an array of integers arr; it is a valid sequence when concatenating the node values along some root-to-leaf path yields exactly arr.



Solution



A few things to note about my solution:
  • The recursion is pretty simple and makes lovely use of the match syntax
  • rust match syntax doesn't let me match a slice into (head, tail) as one might be used to in other pattern-matching languages, or in a lisp where all lists are built up from recursively defined two-element pairs, e.g. (a, (b, (c, (d, nil))))
  • During development, I worked out all the edge case logic with the recursive calls commented out and short circuited. After getting that all working, I dug into the type issues with my initial naive attempt at executing the recursion.
  • Extracting a value from an Option<Rc<RefCell<T>>> requires jumping through a couple of hoops.
  • My internal recursive function has a slightly different signature than the public function (see the sketch below). In my python solution, I was able to use the primary interface in my recursion code; passing literal vecs borrowed from the RefCell was just never going to work in rust.
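
A rough sketch of the shape of the solution (simplified for this post, not my exact submission; TreeNode follows leetcode's standard rust scaffold, and split_first() stands in for the (head, tail) pattern match):

use std::cell::RefCell;
use std::rc::Rc;

// Leetcode's standard rust tree node.
#[derive(Debug, PartialEq, Eq)]
pub struct TreeNode {
    pub val: i32,
    pub left: Option<Rc<RefCell<TreeNode>>>,
    pub right: Option<Rc<RefCell<TreeNode>>>,
}

pub fn is_valid_sequence(root: Option<Rc<RefCell<TreeNode>>>, arr: Vec<i32>) -> bool {
    // Internal helper with a different signature than the public entry
    // point: it borrows a slice so recursion can shrink the sequence
    // without copying.
    fn helper(node: &Option<Rc<RefCell<TreeNode>>>, seq: &[i32]) -> bool {
        match (node, seq.split_first()) {
            // Sequence exhausted above a node, or tree exhausted with
            // sequence remaining: not a root-to-leaf match.
            (_, None) | (None, _) => false,
            (Some(rc), Some((head, rest))) => {
                let n = rc.borrow(); // hoop one: borrow through the RefCell
                if n.val != *head {
                    false
                } else if rest.is_empty() {
                    // Whole sequence consumed: valid only at a leaf.
                    n.left.is_none() && n.right.is_none()
                } else {
                    helper(&n.left, rest) || helper(&n.right, rest)
                }
            }
        }
    }
    helper(&root, &arr)
}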

tests

Now let's add some tests!
The leetcode harness handles converting tests from a text form, populating the tree, and then running the algorithm. Details of their population routine are not available, so adding tests manually is ... annoying. Which makes it a good practice exercise for building Rc<RefCell<TreeNode>> trees by hand.
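
Something like this (reusing TreeNode and is_valid_sequence from the sketch above; the tree and the sequences are made up for illustration):

// A small constructor to cut down on the Rc::new(RefCell::new(...)) noise.
fn node(
    val: i32,
    left: Option<Rc<RefCell<TreeNode>>>,
    right: Option<Rc<RefCell<TreeNode>>>,
) -> Option<Rc<RefCell<TreeNode>>> {
    Some(Rc::new(RefCell::new(TreeNode { val, left, right })))
}

#[test]
fn sequence_checks() {
    // A small made-up tree:
    //       1
    //      / \
    //     2   3
    //    /
    //   4
    let leaf = |v| node(v, None, None);
    let tree = node(1, node(2, leaf(4), None), leaf(3));
    assert!(is_valid_sequence(tree.clone(), vec![1, 2, 4]));
    assert!(is_valid_sequence(tree.clone(), vec![1, 3]));
    // Stops short of a leaf, so it is not a valid sequence:
    assert!(!is_valid_sequence(tree.clone(), vec![1, 2]));
    assert!(!is_valid_sequence(tree, vec![1, 5]));
}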

Rust Playpen

The full code is available to edit and play with on the rust playpen.

Tuesday, November 5, 2019

Update tmux default directory for new windows

TIL:
  • the -c flag to attach-session will update the default current working directory for new windows.
  • attach-session can be run from the tmux command prompt with an arg of -t . to connect to the current session,
    so we don't need to detach and reattach.
  • attach also supports -c
  • tmux makes the current pane's working directory available in the command prompt as '#{pane_current_path}'
  • new-window also supports the -c option.
Set the default CWD for new windows to the current directory:

From the tmux command prompt:

:attach -c '#{pane_current_path}'

# or the more verbose attach-session version,
# listed because I thought of this first :)
:attach-session -t . -c '#{pane_current_path}'


Bound to a char in .tmux.conf:

# update default path for new windows to the current path 
bind-key 'M-/' attach -c '#{pane_current_path}'

# Open a new window using the current working directory,
bind-key 'C' new-window -c '#{pane_current_path}'

For reference: see this StackOverflow comment and this Unix StackExchange answer.

Saturday, February 9, 2019

Hacktoberfest 2018

For Hacktoberfest 2018, I set a goal of completing both the primary contest and at least one side quest. I succeeded!

Hacktoberfest is a fun "contest" sponsored by DigitalOcean and Github each October to encourage open-source contributions. If you make enough pull-requests after signing up, then you win a t-shirt and some stickers. This year the primary goal was 5 pull-requests on any public github repositories. I recommend it as a fun way to get practice working on open source software and pushing pull-requests. I'll remind you to sign up next October.

As my side quest, I completed the microsoft challenge -- one pull request on any of their repos plus filling out a form. Today I found their email with the redemption code in my inbox; it arrived almost two months ago. So now my shirt is on the way (albeit not in my preferred size, as XL was the only one out of stock)! Similarly, it took a month for me to notice the confirmation for the primary contest and confirm my shirt. Note to self: clean up your email!

Thanks to Kivanc, we even had an office hacktoberfest hack-night.

See my PRs from October, 2018.

Hi Blog!

It's been a dark few years here on the "darkest timeline." I'm going to let my light shine -- I'm hopeful that I'll write some fun tech content in the near future detailing my current adventures.

I have a full docket of books and videos I'm reading and watching on safari, which triggers thoughts about how to amalgamate the knowledge into a lecture/presentation. Assuming I keep up this newly recovered energy and excitement, you'll see more here soon.

peace!
AG

Friday, September 16, 2016

But these little setbacks are sometimes just what we need to take a giant step forward. Right, Kent?

Wednesday, May 25, 2016

maintaining SSH_AUTH_SOCK in tmux

Tmux has a neat feature where certain environment variables can be updated during attachment.

SSH_AUTH_SOCK is included by default. When you reconnect from within a new ssh environment, SSH_AUTH_SOCK is updated inside of tmux. Any new windows created in the tmux session will have the updated ssh information.

Already running shells are not updated (how would tmux tell the shell to update?). I've added a zsh function to my workflow to pull in the updated value from a running shell. It wraps tmux's show-environment command. It has to be a shell function rather than a script because it needs to affect the running shell.

fix_ssh () {
        # import tmux's updated SSH_AUTH_SOCK into the current shell
        eval $(tmux show-environment | grep ^SSH_AUTH_SOCK)
}

I don't normally start many interactive ssh sessions from my remote box, but I do need to talk to my local SSH agent to connect to my private git repos. I hate running an update and seeing Permission denied. Before I added this fix_ssh command, I caught myself opening new tmux windows to run the fetch from and then closing them to return to my active shell -- an expensive and distracting work-around.

% git fetch
Permission denied (publickey).
fatal: Could not read from remote repository.
% fix_ssh
% git fetch
# success!

When I run with screen, I set SSH_AUTH_SOCK to a stable path before opening the initial screen session and then manually update the sock on each login. When I remember.

# for screen
MY_SSH_AUTH_SOCK=$HOME/.ssh/auth_sock
rm -f $MY_SSH_AUTH_SOCK && ln -s $SSH_AUTH_SOCK $MY_SSH_AUTH_SOCK && export SSH_AUTH_SOCK=$MY_SSH_AUTH_SOCK

Tuesday, February 9, 2016

Interview with an adware author.
It was funny. It really showed me the power of gradualism. It’s hard to get people to do something bad all in one big jump, but if you can cut it up into small enough pieces, you can get people to do almost anything.

http://philosecurity.org/2009/01/12/interview-with-an-adware-author

Sunday, February 7, 2016

how-to: run git interactive rebase non-interactively

TL;DR

git alias to autosquash fixup commits non-interactively:
git config --global alias.fixup '!GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash'

non-interactive interactive rebase

In the normal workflow, git interactive rebase presents the user with a document to edit interactively to modify and reorder commits. What if you want to run it non-interactively?

Yes, you can do this. And yes I have a use-case!

I'd like to apply/squash all my --fixup commits automatically without spending time in that editor screen. This is an easy use case because I don't change anything in the editor window; that's handled by the --autosquash flag to rebase.

-i
--interactive
Make a list of the commits which are about to be rebased. Let the user edit that list before rebasing. This mode can also be used to split commits (see SPLITTING COMMITS below).

The commit list format can be changed by setting the configuration option rebase.instructionFormat. A customized instruction format will automatically have the long commit hash prepended to the format.
-- git rebase documentation

Git respects the traditional unix environment variables $EDITOR and $VISUAL. Overriding one of those will change the editor that is run during interactive rebase, but it also changes the editor used during the rebase for editing commit messages and the like.

A third environment variable was added by Peter Oberndorfer: $GIT_SEQUENCE_EDITOR. This editor is only used for the interactive rebase edit. As an aside, this is a wonderful commit message.

"rebase -i": support special-purpose editor to edit insn sheet

The insn sheet used by "rebase -i" is designed to be easily editable by any text editor, but an editor that is specifically meant for it (but is otherwise unsuitable for editing regular text files) could be useful by allowing drag & drop reordering in a GUI environment, for example.

The GIT_SEQUENCE_EDITOR environment variable and/or the sequence.editor configuration variable can be used to specify such an editor, while allowing the usual editor to be used to edit commit log messages. As usual, the environment variable takes precedence over the configuration variable.

It is envisioned that other "sequencer" based tools will use the same mechanism.

Signed-off-by: Peter Oberndorfer
Signed-off-by: Junio C Hamano

-- http://git.kernel.org/cgit/git/git.git/commit/?id=821881d88d3012a64a52ece9a8c2571ca00c35cd

Did I just know all this? No, not really. I hadn't heard of GIT_SEQUENCE_EDITOR until reading the code for the silly little git --blame-someone-else script going around. That gave me the keyword to search to find this excellent Stack Overflow answer.

Autosquash

For my usage, I just need an editor that completes successfully without modifying the input. Luckily I have one of those, a bunch really, but let's go with the simplest: true. Yep, this will run an autosquash interactive rebase without showing me the pick window, where $COMMIT_SHA is the base reference for the rebase.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash $COMMIT_SHA

By defining the environment variable at the start of the command, it is only stored in the environment for that command.

I've now stored this as a git alias to test out. I'll let you know how it goes.
git config --global alias.fixup '!GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash'

Examples

git fixup master
Rebase the current branch onto the HEAD of master and autosquash the commits.

rust: to_string() vs to_owned() for string literals

Always use to_owned() to convert a string literal.

I found this lovely explanation of to_string() vs to_owned() for rust. Use to_string() only for other types that can convert to a string.

You should always be using to_owned(). to_string() is the generic conversion to a String from any type implementing the ToString trait. It uses the formatting functions and therefor might end up doing multiple allocations and running much more code than a simple to_owned() which just allocates a buffer and copies the literal into the buffer.
-- https://users.rust-lang.org/t/to-string-vs-to-owned-for-string-literals/1441

With the caveat that this may be fixed in the future to optimize to_string() on string literals.

This may be fixed in the future with specialization, as str could implement ToString directly instead of having it go through the generic impl ToString for T where T: Display {} implementation, which employs the formatting framework. But currently I do concur with your recommendation.
-- DroidLogician
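
A trivial illustration; both lines produce the same owned String, they just take different paths to get there:

fn main() {
    // to_owned(): allocate a buffer and copy the literal's bytes into it.
    let a: String = "hello".to_owned();
    // to_string(): same result, but routed through the ToString/Display
    // formatting machinery (at least until specialization changes that).
    let b: String = "hello".to_string();
    assert_eq!(a, b);
}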

Sunday, January 24, 2016

SCALE14x

I'm presenting at SCALE 14x on Sunday January 24, 2016.

Fix the Website: a devops success story (details)

Here are my slides!

  1. Original Keynote file
  2. PDF
  3. PDF with presenter info

Wednesday, July 8, 2015

Effective Git messages and history inspection

Embedded below is my presentation from YAPC::NA 2015 on Effective Git: better commits via inspecting history and code archeology.

I showed the elements of an effective commit message, why they're useful during inspection of the code, and how to coerce your rough-draft feature branch into a production-ready artifact.

The slides in the video are washed out, so follow along with the Slides (pdf)

From the talk description:

Harness the power of Version Control to view a project’s evolution over time. We have the luxury of moving forward and backwards through the history of our projects, viewing changes through time and reading sign posts along the journey. Experience reading commit messages will prove how useful they are at sharing the mental model behind the code. Reading historical commit messages and viewing diffs improves our ability to document and stage our own commits. Commits are not write-only! They are messages from the past that tell us about our present.

I’ll show you the tools I use for diving into a new code base and how I interact with my current projects on a daily basis. I’ll show how I answer the questions that come up when reading and debugging code. I’ll show you how I stage and rebase my commits to make a readable history. You’re keystrokes away from pivoting from code to annotation to arbitrary diffs, then cross-correlating commit messages with your ticketing system.

Wednesday, April 22, 2015

Renew expiring GeoTrust HTTPS/SSL certificate in Amazon AWS for S3 and CloudFront

Key Insight

AWS doesn't let you modify the key for existing server-certificates, forcing you to create new ones and then update CloudFront (CF) and Elastic Load Balancer (ELB) configurations to use the new cert.

My corporate https/ssl certificate is expiring. I need to renew it and get it pushed to AWS IAM for use in S3 and CloudFront. If you're in the same boat, I hope these instructions help you out.

PS. Hi Future me, I'll see you in about a year when this round of certs expires.

Materials Needed:

  1. CSR and private key file.
    1. The current set is preferred.
    2. If you don't have the original files, you can create a new pair.
    3. If you are changing the CSR, your certificate authority may need to spend time re-validating you.
  2. account & password to your certificate authority.
  3. aws credentials and access to modify IAM certificates
  4. aws command line tools installed.

Basic Steps:

  1. Renew the certificate:
    1. Connect to your certificate authority. For me this is GeoTrust.
    2. Click the big [renew] button by your current certificate.
      1. Pick the new certificate term.
      2. Confirm admin and billing contacts.
      3. Update the CSR for confirmation.
      4. Pay.
      5. Wait for confirmation.
  2. Download and prep the certificate files:
    1. Download the certificate bundle. Choose type "other", which provides a zipped bundle of files. Unzip and enter the directory, which contains:
      crossRootCA.cer
      getting_started.txt
      IntermediateCA.cer
      ssl_certificate.cer
    2. Create a certificate bundle from the root and intermediate files:
      cat IntermediateCA.cer crossRootCA.cer > geotrust-chain.pem
    3. Copy the original secret key to the local dir. For me this is company.rsa.key. This must be an RSA key in x509 format:
      cp secret_files/company.rsa.key ./
  3. Create a new AWS IAM server-certificate.
    1. AWS doesn't support modifying the keyfile in existing server-certificates, so we need to create new ones.
    2. CloudFront requires a separate server-certificate with a path starting with '/cloudfront/', so we'll upload the key twice to create two server-certificates:
    3. aws iam upload-server-certificate \
      --server-certificate-name company-test \
      --certificate-body file://ssl_certificate.cer \
      --private-key file://company.rsa.key \
      --certificate-chain file://geotrust-chain.pem \
      --path /
    4. aws iam upload-server-certificate \
      --server-certificate-name company-test-cf \
      --certificate-body file://ssl_certificate.cer \
      --private-key file://company.rsa.key \
      --certificate-chain file://geotrust-chain.pem \
      --path /cloudfront/
  4. Update AWS to use the new server-certificates
    1. Cloudfront:
      1. For each CloudFront distribution using the expiring server-certificate: 
        1. In the console: Console -> CloudFront -> Distribution Name -> [General] -> [Edit] 
        2. Then choose the new certificate from the drop-down.
    2. ELB:
      1. Console -> EC2 -> (pick region) -> Load Balancers
      2. For each load balancer that uses HTTPS with the old cert:
        1. right-click -> 'edit listeners'
        2. Use the "change" link in the SSL Certificate column.
          1. Certificate Type: Choose an existing certificate
          2. Certificate Name: choose the new certificate from the drop-down.
Today I learned about and used the aws iam *-server-certificate* commands. Next steps would be bypassing the console and automating detection and updates of ELB and CF entries.

Sunday, February 8, 2015

haskell on centos 6.5

Use justhub rather than the version in the epel repo.

Don't bother with the version of haskell-platform in the epel repo. It is sufficiently out-of-date (circa 2010) that it can't update via cabal install cabal-install. Jump straight to using justhub.

Justhub example for centos 6.x:

# install the justhub yum repo:
sudo rpm -ivh http://sherkin.justhub.org/el6/RPMS/x86_64/justhub-release-2.0-4.0.el6.x86_64.rpm

# install the current haskell version into /usr/bin
sudo yum install haskell

# update cabal
cabal update

# e.g. install some packages via cabal
cabal install haskell-src-exts
Now I can get back to coding for exercism.io. Come review my first haskell program.

Thursday, July 3, 2014

Monitorama Conference

I attended the second Monitorama Conference last month in Portland and the first last year in Boston. It’s been a privilege and joy to watch the journey as the conference coalesced from twitter gripes to discussions to international happening.
“An Open Source Monitoring Conference & Hackathon”, Monitorama is focused on embracing open source and improving monitoring to improve the lives of folks in development (dev) and operations (ops). Monitors are the tools we use to watch over our computers (and websites) and make sure they are running as expected. Monitorama is quickly becoming one of my favorite conferences, alongside SCaLE, OSCON and YAPC. These all share a theme of grass-roots Open Source development and organization. I love the Open Source tenets of sharing, improvement and experimentation.
Twitter griping about monitoring led to the twitter hashtag #monitoringsucks. Venting led to discussions, which led to the realization that the strong emotional response was driven by a need for better tools — we hate monitoring because it is both important and hard to do well. Everyone complains about nagios, but it’s been the market leader for 10+ years because it works. The tools, and thus monitoring, would be better if we gave them some love. So the conversation migrated to #monitoringLove and discussions of how to make things better. This period saw tools like graphite and statsd emerge into popularity. Possibly as a joke, Jason Dixon floated the idea of a conference and then willed it into awesome existence.
Why Me?
Because you care about the tools that you work with. You’re an artisan within your team and want to help improve the work environment for you and your peers. We’ve all heard that monitoring sucks, but you want to do something about it.
(From Monitorama I, 2013)
monitoring love
#MonitoringLove fit nicely with ideas behind the DevOps movement: improve the dev and ops communities, get them to work together, and get ops to write and share code. As Carlo (@lolcatstevens) of DevOpsLA said during his SCaLE talk, “Ops, you want Dev respect? Ship some code!” (paraphrased). The groundswell of support for tools (and the operators of those tools) was unexpected and encouraging, also spawning #hugops conversations reminding us to thank our ops folks for their tireless struggle to keep everything running. P.S. Hey DM Ops, thanks for everything!!!
The first monitorama was two days, with the second day devoted to a hackathon. A hundred people hunkered down to listen to talks, bond, converse and write code. It was fun (and intimidating) to have so many project authors attending, reminding us that they’re normal people, albeit ones who share their talent and passion. Graphite real-time graphs were a big theme, as were nagios replacements Riemann and Sensu, and log tools like logstash, kibana and elasticsearch.
A year later, we could see how much new ground was covered. Talks assumed you’d already started using logstash and elasticsearch and graphite and tried a bevy of graphite front-end replacements. Our organizer, Jason Dixon, did a fabulous job of maintaining the small-conference energy and passion even as we increased to ~300 people and expanded to 3 days. We kept a single-track approach, meaning you could see everything and feel included. Inclusion, cooperation and encouragement were all specifically emphasized. The hackathon was less pronounced, merging with tutorials on the third day. After each tutorial, I was struck with project ideas and possible contributions. I really enjoyed the tutorial on flapjack, and I jumped into fixing some tickets and install issues, reminding myself how to use ruby along the way. Hands-on fiddling was encouraged throughout the conference, reminding us not to be hidden observers.
I collected the slide decks for almost all of the presentations this year and collated them into a post for you. Most of them are also available as video uploaded to Vimeo. The audio and video quality are better than I expected, pretty awesome actually.
The Grafana tutorial (video) was particularly well received. Torkel Ödegaard flew in from Sweden to show his new project, built from the core of the excellent Kibana (elastic search) project. Grafana is an open source metrics dashboard and graph editor for Graphite and InfluxDB — use it to build beautiful graphs for graphite data. And there is a live demo to play with while you watch the tutorial. To paraphrase his intro: “I used graphite! I loved it! None of my teammates wanted to make graphs in the terrible UI, sadness. I used Kibana, I F’in’ love kibana. The Graphite UI is terrible. the Kibana UI is awesome. So I started hacking.” And to paraphrase the audience “HOLY CRAP! I WANT THAT NOW! HOW CAN WE GET YOU TO WORK ON IT FULL-TIME?! LOVE!” And there was much rejoicing #hugops! Torkel also plays a fine game of table tennis, we met in the first round of the Ping Pong tourney, battle of the ‘gaaaaards. Spoiler: I made it to the semi-finals.

Meeting new peers at a conference is a wonderful boost of energy and drive. I highly recommend it, even if it’s a bit far afield of what you work on, the new skills will help you see new solutions to your current problems.
Top hits I’d recommend from the conference:

  • All the tutorials: Grafana, Dashing, Flapjack, Kibana and InfluxDB.
  • 17th century shipbuilding and your failed software project – hilarious lightning talk of the “WAT” variety — warning, some “adult” language.
  • Keynote by Adrian “netflix guy” Cockroft.
  • Computers are a Sadness, I am the cure, insightfully funny look into software and ops by the incomparable James Mickens. Calorie free, but entertaining. Funnest and funniest talk you’ll see at a tech talk this year.
  • Cost and Complexity of Reactive Monitoring, a wonderful talk on how and why to monitor. Baker is a nice fella; I’m happy to have made friends with him.
  • Lifecycle of an outage, Scott Sanders’ talk about how Github handles outages. Great look at their internal workflow and tools during emergencies.
  • Car Alarms vs Smoke Alarms, a talk about Sensitivity and Specificity as imported from medical probability conversations — how to calculate the positive predictive value of a test. A useful diagram to view while watching.
  • Find your favorite by browsing All Videos.
  • @Fun_Cuddles Audit All The Things, a seriously hard-core talk on security logging, including some sweet hooks into the linux audit system. “We found no evidence that any customer data was accessed, changed or lost,” generally means “We have no idea!”. Jen is awesome!
  • pretty much ALL OF THE VIDEOS!
Thanks for reading. Please let me know if you watch and enjoy any of these talks, I’d love to discuss them with you.

Wednesday, June 11, 2014

Test-Driven Development with Python.

Harry Percival (@hjwp / obeythetestinggoat on gmail) has written a new book on TDD with python: "Test-Driven Development with Python." An early release of the book is available for free reading on chimera.oreilly.com.

Last week he led a webcast, "Outside-in TDD and Unit Test Isolation with Python, Django and Selenium." It was almost 2 hours, with lots of good stuff. He started by explaining traditional (inside-out) TDD and then contrasted it with outside-in, all in the context of a webapp.

O'Reilly dropped a 50% discount code during the webcast, not sure how long it will last: "WCYAZ".

I "watched" the webcast live, but was on my phone which only provided the audio stream and not the slides. I'm looking forward to watching the archive and reading the book.

Monday, May 12, 2014

Monitorama Slides 2014

Videos will be posted in the monitorama channel on vimeo: http://vimeo.com/monitorama.

Until then, enjoy this collection of slidedecks and twitter handles.

Day 1:

Please, no More Minutes, Milliseconds, Monoliths... or Monitoring Tools!

Computers are a Sadness, I am the Cure

  • James Mickens
  • No slides posted. No twitter handle.
  • lots of photos.
  • "Say 'Word Count' one more time"

Simple math to get some signal out of your noisy sea of data

The Care and Feeding of Monitoring

Car Alarms and Smoke Alarms

Metrics 2.0

Our Most Wicked Problem

StatsD at New York Times

The cost and complexity of reactive monitoring

  • Chris Baker
  • @datumrich
  • slides: none yet

From Zero To Visibility

Day 2:

"Auditing all the things": The future of smarter monitoring and detection

Is There An Echo In Here?: Applying Audio DSP algorithms to monitoring

A Melange of Methods for Manipulating Monitored Data

The Final Crontab

This One Weird Time-Series Math Trick

The Lifecycle of an Outage

A whirlwind tour of Etsy's monitoring stack

Wiff: The Wayfair Network Sniffer

Web performance observability

Day 2: Lightning Talks

ServerSpec and Sensu

Monitoring for Distributed Operational Responsibility

Postgres Performance Monitoring

  • Larry Price
  • @laprice

Accidentally catching a hacker with monitoring

  • Xiao Yu
  • @HypertextRanch
  • "We need to teach developers exactly enough stats and math to solve their biggest problems."

Chess - a reflection of life

  • Narenda Vikram D
  • @contactdnv

17th Century Shipbuilding and Your Failed Software Project

Day 3 – Hacking and Tutorials

Kibana Workshop

Flapjack Workshop

Dashing Workshop

InfluxDB Workshop

Grafana Workshop


Friday, November 15, 2013

Kinesis Advantage: mapping the Macintosh Power key

TL;DR

Press = and Scroll Lock together while in a pc master mode to make Scroll Lock the Macintosh Power Key.

Motivation

I normally use non-windows pc mode (=p) for my kinesis. Now that I'm on a mac I need a Command key, so I switched to windows pc mode. Windows pc mode only changes one thumb key relative to non-windows pc mode: the right alt becomes a Command/Windows key. Mac mode remaps all the alt and control locations and produces two Command/Windows keys.

I rarely need the power button, so I hadn't bothered to figure this out. But now I want to be able to suspend/power my laptop without opening it and waiting for the graphic re-layout as the system switches to multi-monitor mode.

How-to

It's a simple matter to pull the Power Key binding from mac (=m) mode into any of the other modes: press = and Scroll Lock together to copy the binding from the default mode into the current one. The tricky part was finding the original binding.

  • Kinesis advantage supports three master settings: macintosh (=m), non-windows pc(=p) and windows pc(=w).
  • macintosh mode (=m) is the default mode.
  • macintosh mode maps Scroll Lock to Power Key
  • Any key that is mapped by a master setting can be individually remapped using the = key in the number row (top left, above Tab).
Windows PC layout: see the Kinesis USB Advantage manual.

Saturday, June 8, 2013

Hack day with Kenny: Fey::ORM, testing and screen. [lost draft from 1/12/10]

After sleeping through the LILAX users group meeting (sorry guys), I rolled up to Kenny's (Kenny Flegal), where he had invited me for a day of coding and authentic Salvadoran food. Win win!

I briefly showed him the topic of my upcoming Mongers presentation, but mostly we looked at his current project. He is forking a GPL-licensed project to recreate part of the functionality and extend it in a different direction. Along the way he's rewriting the app layer in perl, replacing command-line php scripts.

We discussed the various clauses of the Gnu Affero GPL with regards to the hosting of the project during the initial revs. Can he have a public repository before he has finished changing all references to the old name to a new name and adding "prominent notices stating that you modified it, and giving a relevant date" as per Section 5, paragraph a? We decided that he probably could, but that it'd be easier to start with a private repo and not publish until that part is done. That seems sub-optimal from a "getting the source to the people" mindset, but more optimal for protecting the good name of the original project and publishers.

Along with switching from php to perl, he's pulling out the hard-coded sql from the scripts and moving to an ORM. He's picked Dave Rolsky's impressive Fey ORM. This project has a ridiculously complex set of schemas, with inconsistent table names and no explicit foreign key constraints. As such, it is extra work to get the Fey schema situated.

Kenny started to give me a run-through of some of the code, but it was awkward with both of us on laptops to see the code conveniently. I made him stop and set up a screen session for sharing, as described in my previous post on screen. This was more difficult than I expected, with the problem eventually being that ubuntu 9.04 and beyond has moved /usr/bin/screen to /usr/bin/screen.real and made screen a shell wrapper. The screen multiuser ACL system requires that the screen binary be setuid (chmod +s). With this setup we needed to make screen.real setuid. That took a while to notice.

Once we had a shared session open, it was much easier for him to give me a guided tour of the codebase and database/sql setup. Once that was clear it was time to get some code started. He showed me some of the Fey::ORM model code and how he was migrating over the individual sql statements to the ORM. He had been plugging away on the model code for a while, starting by creating a comment for every line of sql in the application including the file and line of the caller.

The next step was clear, we needed some tests. We set to work getting an initial test of the model code. First we installed Fey::ORM::Mock as a mock layer. This works at a higher level than a standard DBD::Mock interface to allow better testing of the Fey::ORM features. The test didn't pass at first due to missing data in the mock object, so we grabbed a list of the fields that mapped to DB fields and started adding values to pass constraint failures on the data. Once we had a minimal set of data then we started to see problems with the ORM schema description. The lack of well defined foreign key constraints meant we needed to explicitly define that structure for the ORM. More boilerplate code into the model. We repeated this test-update-repeat cycle a few more times adding more data linkage descriptions.

I took a brief break from our pairing and jumped to a different screen to install some goodies. I grabbed a copy of the configuration files from the December la.pm.org talk and started updating his config. He didn't have a .vimrc, .vim or .perltidyrc on this brand new dev box, so I pulled those in from the repo. I showed him how much time using ":make" in vim could slice off his build/test cycle, and he was super excited. (ok, not till the third or fourth try but he eventually got the hang of it).

To get around some issues in code placement, I modified the .vimrc and .vim/ftplugin/compiler code to add -MFindBin::libs to the calls to perl -c and prove. This allowed the parent libs/ directory to be found for these non-installed modules. This is a bit of a hack and I'll get it removed as we move closer to an initial release and pick a packaging tool, possibly Dist::Zilla.

An open question is the speed of Fey::ORM. It takes a big startup hit while building the models from the schema and interacting with the database. This is supposed to lead to a big speed gain during runtime from aggressive caching of that information. All I know for certain is that the compile-run-test cycle was really slow. This is my first time using Fey so I don't know how this plays out normally. It could just be that the number of crosslinked tables in the db config were causing additional slowdowns.

By this point we had already had two delicious meals of Salvadoran cuisine and it was approaching midnight. The first meal was home-cooked fried (skinless) chicken for lunch, and the second was pupusas at an excellent local place in Van Nuys. I was all coded out, which made for a perfect transition to the party at Andy Bandit's that night, conveniently just 6 miles from Kenny's.

All in all, a fine Saturday.