Log commands to Google Cloud Stackdriver Logs

December 15, 2017   

Google Cloud Platform (GCP) has a service called Stackdriver Logging, which provides a nice interface for accessing logs.

Stackdriver Logging is integrated with all GCP services, but it can also be extended. Users can create custom logs and access them centrally using the web-based interface or the Google Cloud SDK.

This got me wondering whether there was a way to log terminal commands locally or on a server. It is possible by setting the PROMPT_COMMAND variable in Bash. After a command is submitted, the value of PROMPT_COMMAND is interpreted (technically, it is interpreted just before the next prompt is printed to the screen).

I wrote up a quick function that checks whether the last command exited successfully (0) or resulted in an error (>0), and logs it with severity INFO or ERROR, respectively. Then I assigned the function to the PROMPT_COMMAND variable. Note that you may need to activate gcloud beta logging for this to work.

function prompt {
    if [[ $? -eq 0 ]]; then
        (gcloud beta logging write bash_log "`fc -nl -1`" --severity=INFO > /dev/null 2>&1 &)
    else
        (gcloud beta logging write bash_log "`fc -nl -1`" --severity=ERROR > /dev/null 2>&1 &)
    fi
}
PROMPT_COMMAND=prompt


Now check the logging interface and you will see your commands are logged!


Python Command-line skeleton

February 2, 2017   

Writing a command-line interface (CLI) is an easy way to extend the functionality and ease of use of any code you write.

Python comes with a built-in module, argparse, that can be used to easily develop command-line interfaces. To speed up the process, I have developed a ‘skeleton’ application that can be forked on GitHub and used to quickly develop CLI programs in Python.

The repo has the following features added:

  • Testing with travis-ci and py.test
  • Coverage analysis using coveralls
  • A setup file that will install the command
  • A simple argparse interface
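To give a sense of the style of interface the skeleton encourages, here is a minimal argparse sketch. The program name, arguments, and greeting behavior are illustrative assumptions for this post, not the repo's actual interface:

```python
import argparse


def main(argv=None):
    # Build a small parser; names here are illustrative only.
    parser = argparse.ArgumentParser(
        prog="skeleton",
        description="Example command-line interface.")
    parser.add_argument("name", help="Name to greet.")
    parser.add_argument("--shout", action="store_true",
                        help="Print the greeting in upper case.")
    args = parser.parse_args(argv)

    greeting = f"Hello, {args.name}!"
    if args.shout:
        greeting = greeting.upper()
    print(greeting)
    return greeting


if __name__ == "__main__":
    main()
```

A setup file can then expose `main` as a console-script entry point so the command is installed alongside the package.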

To get started, you should sign up for accounts on travis-ci and coveralls, and fork the repo!

repo python-cli-skeleton on Github


Introducing a Chicago Bioinformatics Slack Channel

January 31, 2017   

Today I am introducing a new Slack team for bioinformaticians in Chicago.

Sign up for the Chicago Bioinformatics Slack Channel!

Currently, anyone with an email at the following domains can sign up:

  • @northwestern.edu
  • @uchicago.edu
  • @uic.edu
  • @depaul.edu
  • @luc.edu
  • @iit.edu

Members can invite anyone. I am happy to add any Chicago-area domains. Please let me know which ones I am missing!

The Slack team currently features channels for bioinformatics-help, general, introductions, meetups, and random. We can add more channels!

Alfred Image Utilities

January 15, 2017   

A workflow for making quick changes to image files. Alfred-image-utilities grabs any selected images in the frontmost Finder window and can apply changes to them. In most cases, a copy of the original image is kept, renamed to <filename>.orig.<ext>. You can replace the original file instead by holding command when executing most commands.


Main Menu


Convert to png or jpg

You can convert from a large number of formats to jpg or png. The original file is retained unless you hold command.


Scale images by a maximum width/height, by percent, or generate thumbnails.

Hold command to replace the original. This option is not available when generating thumbnails; generating a thumbnail appends .thumb to the filename (<filename>.thumb.<ext>).


Rotate images (clockwise)

Hold command to replace original.


Convert images to black and white.

Hold command to replace original.




rdatastore

December 15, 2016

I’ve developed a new package for R called rdatastore that is available at cloudyr/rdatastore. rdatastore provides an interface to Google Cloud’s Datastore service. Google Cloud Datastore is a NoSQL database, which provides a mechanism for storing and retrieving heterogeneous data. Although Google Datastore is not useful for storing large datasets, it has a number of useful applications within R. For example:

  • Saving and loading credentials for use with other services.
  • Caching data. This is implemented using datastore in my version of the memoise package.
  • Saving/loading universally used pieces of data (e.g. parameters, options, settings) across systems or between work/home.
  • Storage and retrieval of small (<10,000 row) datasets. Useful for integration of summary datasets.

The last two reasons are the primary motivation for developing rdatastore. Parallelized pipelines can simultaneously submit results to datastore (across many nodes or machines), and the results are obtainable for analysis within R. Settings can be updated on one machine and retrieved on others as well, obviating the need to modify virtual machines or scripts in many cases.


The datastore interface can be used to view and edit data.


  1. Set up a Google Cloud Platform account and create a new project.
  2. Download the Google Cloud SDK. This provides the gcloud command-line tool.
  3. Install rdatastore



authenticate_datastore("andersen-lab") # Enter your project ID here. rdatastore will authenticate using Oauth.

Storing Data


Individual entities can be stored using commit(). You have to supply a kind (which is analogous to a table in relational database systems). You may optionally supply a name. Any additional arguments are added as properties. Datatypes are inferred from R datatypes. For example:

commit(kind = "Car", name = "Tesla", wheels = 4) # Stores a new entity named 'Tesla'


kind  name   wheels
Car   Tesla  4

Important! Stick with basic datatypes like character vectors, integers, doubles, binary, and datetime objects. Not all datatypes are supported.

I designed rdatastore to make it easier to append data rather than overwrite it. This is a bit against the grain as far as other datastore libraries go. For example:

commit(kind = "Car", name = "Tesla", electric = TRUE) # Adds a property to the existing 'Tesla' entity

The entity will now be:

kind  name   wheels  electric
Car   Tesla  4       TRUE

If you want to overwrite the entity, you can use keep_existing = FALSE, and the original data will be wiped and replaced.

When using commit(), you can omit the name parameter, in which case Google Datastore will autogenerate an ID for the entity. I’m not sure when this is useful: you won’t be able to look the item up without knowing its ID or performing a query on the entity’s data.


Retrieving Data

Retrieve data by specifying its kind and name.

lookup("Car", "Tesla")
kind  name   wheels  electric
Car   Tesla  4       TRUE


Queries

You can query items using the Google Query Language (GQL). GQL is a lot like SQL.

# Let's commit a few more items
commit("Car", "VW", electric = FALSE)
commit("Car", "Honda", make = "Odyssey", wheels = 4)
commit("Car", "Reliant", make = "Robin", wheels = 3)

gql("SELECT * FROM Car")
kind  name     make     wheels  electric
Car   Honda    Odyssey  4       NA
Car   Reliant  Robin    3       NA
Car   Tesla    NA       4       TRUE

Notice that some properties are NA because they were never specified.

We can also query specific properties, but this will only return entities with those properties defined.

gql("SELECT make FROM Car")
kind  name     make
Car   Honda    Odyssey
Car   Reliant  Robin

You can also filter on properties with GQL:

gql("SELECT * FROM Car WHERE wheels = 3")
kind  name     make   wheels
Car   Reliant  Robin  3