Here it is! My favorite things in life across all domains. This is a work in progress, but hopefully you’ll find one (or a few) things you like and add them to your life. It’s a bit sparse currently, but it will fill in over time.
Note: There are no referral links here, and I am not being paid to advertise anything. Any companies/products listed have earned their place.
Type qset to set your Quiver library location. Quiver-Alfred builds a database of your notes to make querying as fast as possible. The database should refresh once every hour and should take only a few seconds to create.
Memoisation is a technique wherein the results of function calls are cached based on their inputs. For example, the following function calculates values of the Fibonacci sequence in R.
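The post's original snippet isn't shown here; a minimal recursive version might look like this (a sketch, not necessarily the post's exact code):

```r
# Naive recursive Fibonacci -- deliberately slow, since each call
# recomputes the same subproblems over and over. That redundancy
# is exactly what makes it a good candidate for memoisation.
fib <- function(n) {
  if (n < 2) return(n)
  fib(n - 1) + fib(n - 2)
}

fib(10)  # 55
```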
Note that this is a rather inefficient way of calculating values of the Fibonacci sequence, but it is a useful example for understanding memoisation. The following code uses Hadley Wickham’s memoise package.
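The original code block isn't reproduced here; the basic usage of memoise() can be sketched as follows:

```r
library(memoise)

# The slow recursive function from before.
fib <- function(n) {
  if (n < 2) return(n)
  fib(n - 1) + fib(n - 2)
}

# memoise() wraps fib in a function that caches results by input.
fib_mem <- memoise(fib)

system.time(fib_mem(30))  # first call: computed from scratch
system.time(fib_mem(30))  # second call: returned from the cache
```

Note that only the top-level call is cached here: the recursion inside fib still calls the unmemoised fib. Redefining the function to call its memoised self would cache every subproblem as well.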
In the above example, the memoise() function generates a memoised function, which automatically caches results. If the function is run again with the same parameters, it returns the cached result. Memoisation can significantly speed up analyses that repeatedly call slow-running functions.
What if you are running similar analyses within a cluster environment? Caching results in a centralized datastore could speed up analysis across all machines. Alternatively, perhaps you work on different computers at work and at home: forgetting to save/load intermediate files may force long-running functions to be run again, and managing and retaining those intermediate files is cumbersome in its own right. Again, caching the results of memoised functions in a central location (e.g. cloud-based storage) can speed up analytical pipelines across machines.
Recently I’ve put some work into developing additional caches for the memoise package, available here. This version can cache items locally or remotely in a variety of environments. Supported environments include:
R environment (cache_local)
Google Datastore (cache_datastore)
Amazon S3 (cache_aws_s3)
File system (cache_filesystem; allows Dropbox or Google Drive to be used for caching)
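Usage might look like the following sketch, based on the constructor names listed above; the cache directory and bucket name are placeholders, not values from the post:

```r
library(memoise)

slow_summary <- function(x) {
  Sys.sleep(5)   # stand-in for an expensive computation
  summary(x)
}

# Cache on the local file system; pointing this at a Dropbox or
# Google Drive folder shares the cache across your machines.
fs <- cache_filesystem("~/Dropbox/.rcache")
slow_summary_fs <- memoise(slow_summary, cache = fs)

# Or cache in S3 so every cluster node shares one cache:
# s3 <- cache_aws_s3("my-analysis-cache")   # hypothetical bucket name
# slow_summary_s3 <- memoise(slow_summary, cache = s3)
```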
There are a few caveats to consider when using this version of memoise. The external cache options add time to every cache retrieval. That trade-off is worthwhile in cluster environments, where syncing files across instances/nodes can be difficult; when working across home/work machines, locally synced files are the faster choice.
BigQuery is a phenomenal tool for analyzing large datasets. It enables you to upload large datasets and perform sophisticated SQL queries on millions of rows in seconds. Moreover, it can be integrated with R using bigrquery, which lets you query BigQuery using dplyr verbs.
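As a rough sketch of that integration (the project, dataset, and table names here are placeholders, and the connection helper reflects bigrquery's dplyr-backed interface as I understand it, not code from the post):

```r
library(bigrquery)
library(dplyr)

# Connect to a (hypothetical) project and dataset.
bq <- src_bigquery(project = "my-project", dataset = "my_dataset")
flights <- tbl(bq, "flights")

# dplyr verbs are translated to SQL and executed inside BigQuery;
# only the summarised result is pulled back into R.
flights %>%
  group_by(carrier) %>%
  summarise(n = n(), mean_delay = mean(dep_delay)) %>%
  arrange(desc(n))
```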
It is easy to upload datasets to BigQuery, although it requires you to specify a schema. If your dataset has many columns, this can be a pain to do manually, so I wrote a script to automate the process. The script determines the variable types from the first 500 rows of a tab-delimited dataset. To get started, download the python script below and save it as schema.py.
Save the gist as a script and run it as follows:
python schema.py <file>
The script supports plain-text and gzipped files (both of which BigQuery can load).
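The gist itself isn't reproduced here, but the core logic can be sketched like this (the function names and the INTEGER/FLOAT/STRING fallback order are my own choices, not necessarily the gist's):

```python
import csv
import gzip


def infer_type(values):
    """Infer a BigQuery column type from sample string values.

    Tries INTEGER first, then FLOAT, falling back to STRING if
    neither conversion succeeds for every sampled value.
    """
    for caster, bq_type in ((int, "INTEGER"), (float, "FLOAT")):
        try:
            for v in values:
                caster(v)
            return bq_type
        except ValueError:
            continue
    return "STRING"


def infer_schema(path, sample_rows=500):
    """Build a 'name:TYPE,name:TYPE' schema string from the first
    sample_rows of a tab-delimited file (plain text or gzipped)."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        reader = csv.reader(fh, delimiter="\t")
        header = next(reader)
        samples = [row for _, row in zip(range(sample_rows), reader)]
    columns = list(zip(*samples))  # transpose rows into columns
    return ",".join(
        f"{name}:{infer_type(col)}" for name, col in zip(header, columns)
    )
```

The resulting string can be passed straight to BigQuery's load tooling as the table schema.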