A big list of favorites

November 29, 2016   

Here it is! My favorite things in life across all domains. This is a work in progress, but hopefully you’ll find a thing (or a few) you like and add it to your life. It’s a bit sparse at the moment, but it will fill in over time.

Note: There are no referral links here and I am not being paid to advertise anything here. Any companies/products listed have earned it.



  • Homebrew - A phenomenal package manager. Use with homebrew/science!
  • Autojump - Jump among directories by typing j and part of their name. It even works if you mistype the name! Install with brew install autojump.
  • pyenv - An easy way to manage multiple installations of Python and set which version to use globally, locally, or per directory. Install with brew install pyenv.

Software (OSX)

  • Dropbox - The best backup/syncing solution. I pay for a subscription. I’ve used Box and Google Drive as well. Both are inferior.
  • Transmit - An FTP client.
  • Sublime Text - The best text editor. Extend its functionality with Package Control.
  • Alfred - Like Spotlight, but with a lot more functionality. Workflows extend it considerably. See the ones I’ve written!
  • Sequel Pro - A MySQL GUI. The best database GUI I’ve seen.


  • Github - A great place to work on projects using git.
  • Python - My favorite all-purpose programming language.


Below I list some of my favorite R packages.


  • Jekyll - Static sites
  • Flask - Python framework
  • peewee - A very easy-to-use ORM for simple projects.



iOS Apps

  • Reeder - A great RSS reader.
  • Strava - Fitness tracker. I’ve used Runkeeper in the past, but it’s worth switching. If you want to change fitness apps without losing your history, check out tapiriik.



Tech News


  • Vanguard - Retirement accounts and investing.



  • Pinboard - Simple bookmarks.
  • tapiriik - Sync fitness data across fitness tracking services.




Wikipedia Pages

These are mostly a roundup of interesting ideas/facts/concepts I have come across.

Guitar Printouts

August 17, 2016   

I put these guitar-related printouts (A chord diagram sheet and a fretboard diagram sheet) together years ago:



August 4, 2016   

Search Quiver from Alfred! Quiver-alfred quickly constructs a database of your notes for fast and easy querying.



Type qset to set your Quiver library location. Quiver-alfred then builds a database of your notes to make querying as fast as possible. The database refreshes about once an hour and should only take a few seconds to create.

Type q to use!

You can search tags by hitting q #.

Browse Notes within notebook:

Full Text Search using sqlite:


memoise - Caching in the cloud

July 27, 2016   

Memoisation is a technique wherein the results of function calls are cached based on their inputs. For example, the following function calculates the Fibonacci sequence in R.
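The code block from the original post did not survive here; a minimal naive implementation, assuming the usual recursive definition, would look something like this:

```r
# Naive recursive Fibonacci: fib(0) = 0, fib(1) = 1
fib <- function(n) {
  if (n < 2) return(n)
  fib(n - 1) + fib(n - 2)
}

fib(10)  # 55
```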

Note that this is a rather inefficient way of calculating values of the Fibonacci sequence; however, it is a useful example for understanding memoisation. The following code uses Hadley Wickham’s memoise package.
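The memoised example from the original post is also missing; using the package’s memoise() function, it would look roughly like this (a sketch, not the post’s exact code):

```r
library(memoise)

# A memoised Fibonacci: each fib_mem(n) is computed once and cached,
# so the recursion collapses from exponential to roughly linear time
fib_mem <- memoise(function(n) {
  if (n < 2) return(n)
  fib_mem(n - 1) + fib_mem(n - 2)
})

fib_mem(30)  # computed on the first call
fib_mem(30)  # served from the cache
```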

In the above example, the memoise() function generates a memoised function, which automatically caches its results. If the function is called again with the same parameters, it returns the cached result. Memoisation can significantly speed up an analysis when slow functions are repeatedly called with the same inputs.

What if you are running similar analyses within a cluster environment? The ability to cache results in a centralized datastore could increase the speed of analysis across all machines. Alternatively, perhaps you work on different computers at work and at home. Forgetting to save/load intermediate files may require long-running functions to be run again. Further, managing and retaining intermediate files can be cumbersome and annoying. Again, caching the results of memoised function in a central location (e.g. cloud-based storage) can speed up analytical pipelines across machines.

Recently I’ve put some work into developing additional caches for the memoise package, available here. This version can be used to cache items locally or remotely in a variety of environments. Supported environments include:

  • R environment (cache_local)
  • Google Datastore (cache_datastore)
  • Amazon S3 (cache_aws_s3)
  • File system (cache_filesystem; allows dropbox, google drive to be used for caching)
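As a sketch of how these caches might be used (the cache argument and the cache_* constructors are from the modified memoise described above; the path and bucket name below are placeholders, not values from the post):

```r
library(memoise)

slow_fn <- function(x) { Sys.sleep(5); x^2 }

# Cache on the local file system; pointing this at a Dropbox or
# Google Drive folder shares the cache across machines
fs <- cache_filesystem("~/Dropbox/.rcache")
slow_fn_fs <- memoise(slow_fn, cache = fs)

# Cache in an S3 bucket (assumes AWS credentials are configured)
s3 <- cache_aws_s3("my-cache-bucket")
slow_fn_s3 <- memoise(slow_fn, cache = s3)
```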

There are a few caveats to consider when using this version of memoise. The external cache options take additional time to retrieve cached items. That tradeoff is worthwhile in cluster environments, where syncing files across instances/nodes can be difficult; when simply moving between home and work, a locally synced file cache is preferable.




Google Datastore

Amazon S3

R  R Package  gist

Automatically construct / infer / sense bigquery schema

December 30, 2015   

BigQuery is a phenomenal tool for analyzing large datasets. It lets you upload large datasets and run sophisticated SQL queries over millions of rows in seconds. Moreover, it can be integrated with R via bigrquery, which can be used to interact with BigQuery using some of the functions in dplyr.

It is easy to upload datasets to BigQuery, although it requires you to specify a schema. If a dataset has many columns, this can be a pain to do manually - so I wrote a script to automate the process. The script automatically infers the variable types from the first 500 rows of a tab-delimited dataset. To get started, download the Python script below and save it as schema.py.
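The actual script is in the gist; as an illustration of the approach, a simplified sketch of schema inference over the first 500 rows might look like this (my own reconstruction - the real script’s type rules may differ):

```python
import csv

def infer_type(value):
    """Guess a BigQuery type for a single string value, most specific first."""
    try:
        int(value)
        return "INTEGER"
    except ValueError:
        pass
    try:
        float(value)
        return "FLOAT"
    except ValueError:
        pass
    if value.lower() in ("true", "false"):
        return "BOOLEAN"
    return "STRING"

def infer_schema(path, max_rows=500):
    """Scan the first max_rows of a tab-delimited file and build a schema.

    If a column's guessed types disagree across rows, fall back to STRING.
    """
    with open(path) as f:
        reader = csv.reader(f, delimiter="\t")
        header = next(reader)
        types = [set() for _ in header]
        for i, row in enumerate(reader):
            if i >= max_rows:
                break
            for guesses, value in zip(types, row):
                guesses.add(infer_type(value))
    schema = []
    for name, guessed in zip(header, types):
        field_type = guessed.pop() if len(guessed) == 1 else "STRING"
        schema.append({"name": name, "type": field_type})
    return schema
```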


Save the gist as a script and run it as follows:

python schema.py <file>

The script supports plain text and gzipped files (both of which BigQuery can load).

Output Example


Note that the RECORD and TIMESTAMP field types are not supported.

Programming  bigquery