I’ve decided to take some time to learn new computer science skills. I’ve been so busy with work the past few years that I’ve had to put a number of compelling topics on the back burner.

I know that I’ve missed a lot because I am a mostly self-trained programmer. In undergrad I decided to move away from my technical courses and toward courses in the liberal arts. Even though I have about 24 credits in computer science, math, and logic, I ended up with a degree in sociology. I’ll always kick myself for that.

I had a 9-year career in the public sector. I was always the person least afraid of diving into new technology, and eventually decided that was my passion. I then went to a 4-month boot camp, where I was the only student to take both the RoR and JS tracks. After that I worked as a programmer for 4 years. During that time I took two part-time code school courses on React.js. I’ve learned the most while on the job. I’ve also read numerous books, Stack Overflow threads, and blog posts on a variety of topics.

As an engineer that comes from a non-traditional background, I find it important to continually level up. I want to fill in the gaps of my knowledge in areas I know. I’m always looking to expand my skills into new territory as well.

There are a lot of great introductory programs for complete newbies, including bootcamps, courses, and websites. However, there are not a lot of similarly structured pathways for people looking to move from an intermediate to a senior level. It seems like you have to take the initiative and design your own coursework to get there.

To that end, I’ve decided to unpack these topics:

  • The Fundamentals of Computer Science
  • Microservices Architecture
  • Python and Flask
  • Diving Deep into Ruby and Object Oriented Programming
  • Refactoring and Code Reviews

I collected a few books on these topics over the last 6 months and decided it was time to dive in, reading and coding my way through them during the month of December. Here is a brief overview of what I’m learning from them.

The Imposter’s Handbook

‘The Imposter’s Handbook’ by Rob Conery has been a great help in understanding the fundamentals of computer science.

The Imposter’s Handbook is an excellent primer for people who have been in the field for a while and are looking to learn or revisit the building blocks of computer science.

Topics in The Imposter’s Handbook include, but are not limited to, what you would find in a computer science program:

  • Computation
  • Complexity
  • Big O
  • Lambda Calculus
  • Machinery
  • Data Structures
  • Algorithms: Simple and Advanced
  • Compilation
  • Software Design Principles
  • Functional Programming
  • Databases
  • Essential Unix

I highly recommend this book to anyone who is looking to get an engineering job and wants to level up or revisit what they have learned. It’s a great way to prepare for an interview. I’ve learned so much from this book. The thing it helped with most might be surprising to some people: it brought me to the edge of my own knowledge and showed me a map of the next set of questions I need to ask. I will be unpacking the lessons found in this book for the rest of my career, and I plan on reading it again in the future.

Building Microservices

Building Microservices: Designing Fine-Grained Systems is an O’Reilly book by Sam Newman. It’s a super pragmatic and approachable book about microservices. It’s also language agnostic, which I think will make it more timeless.

In the past few years there has been a shift in the way systems are designed. Web applications have been moving from code-heavy monolithic applications to smaller, self-contained microservices. But developing these systems brings a new set of challenges. This book takes a holistic view of the topics that system architects and administrators consider when building, managing, and evolving microservice architectures.

This book provides you with a firm grounding in the concepts while diving into current solutions for: modeling, integrating, testing, deploying, and monitoring your own services.

You’ll learn:

  • how microservices allow you to align your system design with your organization’s goals
  • options for integrating services with the rest of your system
  • how to take an incremental approach when splitting monoliths
  • how to deploy individual microservices through continuous integration
  • the complexities of testing and monitoring distributed services
  • how to manage security with user-to-service and service-to-service models
  • some of the headaches of scaling microservice architectures

I have been working in systems that have a service-oriented architecture (SOA), and I’ve always wanted to know the thinking behind them. This book was a deep dive into that topic. It’s great for anyone currently working with microservices/SOA, or anyone who wants to move their system toward a microservice style of architecture. In some ways I feel like I’ve been using the patterns and methods discussed in the book and just never had a name or a theory behind them. This book helped me understand why I do what I do.

I’ve had the opportunity to use JavaScript, Ruby, and SQL a lot in the past 4 years. Two languages that have been on my short list to learn are Python and Elixir. Seeing the popularity of Python really made me want to dive in. I even bought a Raspberry Pi! I also come from a research and evaluation background, and I’ve always been intrigued by all the Python libraries built for statistics and crunching big data. I’m intrigued by Python’s machine learning libraries too, but those are way out on the horizon of what I plan on learning now. Lastly, after reading Building Microservices I wanted to know what a Python/Flask microservice might look like.

Learn Python the Hard Way

Learn Python the Hard Way was a great way to start absorbing Python. It’s recently been updated for Python 3. This book helps you quickly learn how to read and write Python by using it. I like its approach of getting behind the keyboard and actively typing code. It’s a great jumping-off point for diving into other books on Python.

The Flask Mega-Tutorial

The Flask Mega-Tutorial is an overarching tutorial for beginner and intermediate Python developers. Flask is a micro-framework for Python based on Werkzeug, Jinja 2, and good intentions. This book was a touchstone for me when I was creating my own Flask app for my Raspberry Pi.

Some of you might remember that this book was written originally for Python 2. The tutorial has been thoroughly revised and expanded for its 2017 release. The concepts that are covered go well beyond Flask, including a wide range of topics Python web developers need to know when writing their own applications.

The Flask Mega-Tutorial teaches you about:

  • practically applying Python,
  • using modern web development practices,
  • all the nooks and crannies of the Flask micro-framework,
  • and introduces you to lots of useful Pip libraries

I like that these books stripped away a lot of the magic happening behind the scenes. I feel like I learn best when things are transparent and I can see exactly what’s happening. When I first learned Ruby I took a similar approach with the language by using Sinatra. Both of these books were a great confidence booster when it comes to using Python. There are lots of wrinkles that make it different from JS and Ruby, but there are also lots of similarities in the way they are used. Learning all this will help me work with Python code in production, and more importantly utilize my Raspberry Pi~! Now that I have a better understanding of Python and Flask, I also feel more prepared to absorb Django.

Refactoring: Ruby Edition

Refactoring: Ruby Edition by Martin Fowler, Jay Fields, and Shane Harvie is an excellent introduction to the discipline of refactoring. I'm starting to see these lessons reappear everywhere in my code and in other learning materials.

The authors introduce a detailed catalog of more than 70 refactorings (is that a word? lol), with guidance on when to apply each of them, step-by-step instructions for using them, and good (mostly good) examples of Ruby code illustrating how they work.

This book helps you understand the core principles of refactoring, the reasons for doing it, and how to recognize “bad smells” in your Ruby code.

I like this book; its approach is very methodical. It walks you through how to do all of the following things step-by-step:

  • Reworking bad designs into well-designed code.
  • Building tests to make sure your refactorings work properly
  • Understanding the challenges of refactoring and how they can be overcome
  • Composing methods to package code properly
  • Moving features between objects to place responsibilities where they fit best
  • Organizing data to make it easier to work with
  • Simplifying conditional expressions and making more effective use of polymorphism
  • Creating interfaces that are easier to understand and use
  • Generalizing code more effectively
  • Performing larger refactorings that transform entire software systems
  • Refactoring Ruby on Rails production code.
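To give a flavor of the catalog, here is a minimal sketch of one of its simplest entries, Extract Method. The Invoice example and its names are hypothetical, my own illustration rather than code from the book:

```ruby
# Before: one method mixes banner printing with detail printing.
class InvoiceBefore
  def initialize(amount)
    @amount = amount
  end

  def print_owing
    puts "*** Customer Owes ***"
    puts "amount: #{@amount}"
  end
end

# After Extract Method: each intention gets its own well-named method,
# so print_owing now reads like a summary of the steps.
class Invoice
  def initialize(amount)
    @amount = amount
  end

  def print_owing
    print_banner
    print_details
  end

  private

  def print_banner
    puts "*** Customer Owes ***"
  end

  def print_details
    puts "amount: #{@amount}"
  end
end

Invoice.new(10).print_owing
```

The behavior is unchanged; the win is readability and reuse, which is the point of most entries in the catalog.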

The lessons taught in this book extend beyond Ruby; one could apply them to most other object-oriented languages, including Python and JS (if you are using it with OO). I plan on re-reading this book many times. Indeed, I start my day (every work day, anyway) by re-reading 2-3 sections of this book. I want to eventually become an expert at understanding and applying the patterns these authors elucidate.

The Future

I plan to use these books as touchstones in the coming months. I will be moving into a more active phase during my sabbatical, where I will be coding. I’m also sure I will be unpacking and implementing the lessons I’ve learned throughout my software engineering career.

I’d like to continue to learn more about these topics. Far off on the horizon, I can see Elixir/Phoenix and machine learning being things I would like to master. I would also like to learn a more functional style of programming in JavaScript (and even Ruby). I am happy to be in an industry where I am encouraged to learn new things every day.


Since the dawn of time (at least since computer science became a field), hiring teams have been asking interviewees to concoct code that creates a Fibonacci sequence. They wanted to know whether an engineer had access to the concepts behind dynamic programming, and whether or not they could write algorithms that scale well. They wanted to separate the wheat from the chaff.

The Fibonacci sequence (for all you pieces of chaff out there) is a series of numbers in which each number (i.e., the Fibonacci number) is the sum of the two preceding numbers. A tiny sample of this series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, etc. This sequence is found in geometry all over the place, and even in nature!

For the purposes of this blog post, let’s say you are applying to a company that makes floor plans for houses shaped like pineapples. To build these houses, the company has to generate a number that determines how many boxes of nails it will take to build the substructure of a building based on its height.


So how do you know if you are using performant code? You benchmark it. For our purposes we will run the code for our solutions out to the 10th, 20th, 30th, and 35th positions in the Fibonacci sequence.

We will capture the amount of time it takes to process the method using Ruby’s Benchmark library.

  require 'benchmark'

  Benchmark.bm do |bm|
    bm.report { Fib.new(iterations).run }
  end
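If you haven’t used Benchmark before, here is a self-contained sketch of the reporting pattern (slow_sum is a throwaway method of mine, standing in for the Fib class we’re about to write):

```ruby
require 'benchmark'

# A throwaway method to time: naive summation from 1 to n.
def slow_sum(n)
  total = 0
  (1..n).each { |i| total += i }
  total
end

# Benchmark.bm prints the user, system, total, and real time
# columns you'll see in the tables below.
Benchmark.bm(12) do |bm|
  [10, 1_000, 100_000].each do |n|
    bm.report("n=#{n}") { slow_sum(n) }
  end
end
```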

The Complex Way

The first way we will try to find solutions to this sequence is dirty, complex, and painful: a method that uses recursion.

class Fib
  attr_reader :max

  def initialize(max_count)
    @max = max_count
  end

  def run
    i = 0
    while i <= max do
      calculateFibAt(i)
      i += 1
    end
  end

  def calculateFibAt(n)
    return n if n < 2
    calculateFibAt(n - 2) + calculateFibAt(n - 1)
  end
end

This code works okay for the first 10 or so numbers in the sequence. But if we go higher, the Benchmark numbers start to look gnarly. It seems like our code might be getting too complex if we go much higher than 30…

seq.    user      system      total       real
10    0.000000   0.000000   0.000000   (0.000105)
20    0.010000   0.000000   0.010000   (0.006232)
30    0.500000   0.010000   0.510000   (0.505737)
35    5.670000   0.020000   5.690000   (5.738828)

Why is it so complex?

Man, it takes a while to calculate out to the 35th position.

It has a Big O complexity we would like to avoid: O(2^n).

You might be saying to yourself, “Eeep, that boggles my tiny little chaff brain,” and “How will I ever get to be as smart as these computer science guys? Can we figure out how many times this method is being called, so we can show how big of a problem it is?” Don’t worry, we can figure it out; it only took me about 30 years to find the time to sit down and do it. If I can, you can~!

The higher the number in the Fibonacci sequence, the more times the method needs to run. Each step higher in the sequence requires the method to run many more times than the last. Because it’s recursive, to run Fibonacci to the 5th number in the sequence you have to run fib(4), which needs to run fib(3) and fib(2)… and so on. To conceptualize O(2^n): every time you increase n by 1, you roughly double the runtime of the code.

fib(5) =          fib(4)            +     fib(3)
                    |                        |
            fib(3)  +  fib(2)          fib(2) + fib(1)
              |          |               |        |
              |          1               1        1
            fib(2) + fib(1)
              |       |
              1       1

So how much of a problem is O(2^n) going to cause our application? How do I get the precise number of times calculateFibAt needs to run to get a given number in the Fibonacci sequence? I can add a counter to tabulate the number of times the method gets called!

class Fib
  attr_accessor :fib_count
  attr_reader :max

  def initialize(max_count)
    @fib_count = 0
    @max = max_count
  end

  def run
    i = 0
    while i <= max do
      calculateFibAt(i)
      i += 1
    end

    puts "The total number of times the calculateFibAt method was used was #{fib_count}."
  end

  def calculateFibAt(n)
    self.fib_count += 1
    return n if n < 2
    calculateFibAt(n - 2) + calculateFibAt(n - 1)
  end
end

To get to the 3rd place in the Fibonacci sequence it takes 5 iterations of the method; to get to the 10th, 453 iterations; to get to the 32nd it would be… 18,454,894…. Ooh snap, this isn’t scaling well…

If this were code in production you would have some explaining to do…


Well, let’s say someone did push the code above to production… a year ago. You used to only build pineapples up to 50 feet, which required 32 steps in the sequence. Now your boss has just come to you. He is upset because everything was working fine until some smarty-pants, probably named Chad, put 55 feet into the form that uses this code, and it took forever for the page to load the final value in the sequence.

Thanks Chad~!

How might we optimize this code so that it’s more performant?

We need to sort out this Big O problem and make it O(n). Accessing a value from an array using its index is always O(1). You remember that blog post you read on dynamic programming: this is possible if we can break the problem down into subproblems and then store their answers in an optimized substructure.

To do this we will want to store the values we calculate… We need to remember the solution to each subproblem (Fibonacci(1), Fibonacci(2), Fibonacci(3), Fibonacci(4), Fibonacci(5), etc.) so we don’t have to calculate it again and again recursively.

class FibFaster
  attr_reader :max

  def initialize(max_count)
    @max = max_count
  end

  def run
    fastercalculateFibAt(max)
  end

  def fastercalculateFibAt(n)
    sequence = [0, 1]
    position = 2

    while position <= n do
      sequence.push(sequence[position - 2] + sequence[position - 1])
      position += 1
    end

    sequence
  end
end

Woah, sweet memory store in FibFaster#fastercalculateFibAt. Now we don’t have to recursively recalculate each number in the sequence; we can store it once and move on!

seq.    user      system      total      real
10    0.000000   0.000000   0.000000  (0.000022)
20    0.000000   0.000000   0.000000  (0.000014)
30    0.000000   0.000000   0.000000  (0.000034)
35    0.000000   0.000000   0.000000  (0.000030)

Look at how much quicker those Benchmark numbers are! Especially as we get further up the sequence…

You might be thinking, “Hey, wait a minute, you have a loop variable in there too!” And yes, that’s true, but with Big O you’re more concerned about the nature of the algorithm. In this case it’s simply O(n): one pass through the loop per position, with each stored value computed only once.

Your boss is happy that you sorted this out, but he’s always looking for ways to save money by minimizing the amount of AWS resources your company uses.

You might also be thinking: why do we store all the numbers in the sequence when we only want to show the user the last one?! Maybe we can optimize this some more…

The Greedy Way

class FibGreedy
  attr_reader :max

  def initialize(max_count)
    @max = max_count
  end

  def run
    greedycalculateFibAt(max)
  end

  def greedycalculateFibAt(n)
    return n if n < 2

    position_1 = 0
    position_2 = 1
    current = 0
    sequence = 2

    while sequence <= n
      current = position_1 + position_2
      position_1 = position_2
      position_2 = current
      sequence += 1
    end

    current
  end
end

Woah, it’s even faster now than the last version. Looking at real time, there isn’t much of a difference anymore between the upper and lower ends of the sequence. It still looks like you can save real computation resources by greedily grabbing only the number you are looking for, especially for numbers higher up the sequence.

seq.    user      system      total       real
10    0.000000   0.000000   0.000000   (0.000012)
20    0.000000   0.000000   0.000000   (0.000013)
30    0.000000   0.000000   0.000000   (0.000012)
35    0.000000   0.000000   0.000000   (0.000013)

Wrapping Up

Dynamic programming amounts to breaking down an optimization problem into simpler sub-problems, and storing the solution to each sub-problem so that each sub-problem is only solved once. Now we can turn our attention to other problems, like shaving yaks.
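The solutions above are bottom-up, but the same store-every-subproblem idea also works top-down. Here is a sketch of a memoized variant of the original recursive method (my own addition, not one of the interview solutions above):

```ruby
class FibMemo
  def initialize
    # Seed the memo with the two base cases.
    @memo = { 0 => 0, 1 => 1 }
  end

  def calculate_fib_at(n)
    # ||= stores each subproblem the first time it is solved, so any
    # later request for the same n is an O(1) hash lookup instead of
    # a whole new recursive tree.
    @memo[n] ||= calculate_fib_at(n - 2) + calculate_fib_at(n - 1)
  end
end

puts FibMemo.new.calculate_fib_at(35) # => 9227465
```

Each position is computed exactly once, so this is O(n) like the iterative versions, while keeping the recursive shape of the original.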

Good job, we have separated the wheat from the chaff. How does it feel to be a golden kernel of wheat? Now you can grow into the solid software engineer you’ve always wanted to be.

Thanks Chad~!

The book ‘The Imposter’s Handbook’ by Rob Conery has also been a great help in understanding computer science fundamentals including unpacking concepts like “Big O” and “Dynamic Programming”.

Here is a great article that focuses on learning Dynamic Programming not by looking at the outcomes, or explaining the algorithm, but rather by using practical steps to find the algorithm. You can read it here: Demystifying Dynamic Programming: How to construct & code dynamic programming algorithms.


During my time at Hack Oregon I took an introductory course on React.JS and ES6.

We covered a fair amount of territory over the course of 8 weeks. We met two times a week for 2.5 hours.

ES6:
  • Arrow functions and the let keyword; block scopes
  • Classes and inheritance
  • Default parameters
  • Destructured assignment
  • Enhanced object literals
  • Generators; iterators + for..of
  • Maps and Sets
  • Promises
  • Rest parameters; spread operator
  • Template literals
  • let + const

React:
  • Introducing JSX
  • Rendering Elements
  • Components and Props
  • State and Lifecycle
  • Handling Events
  • Conditional Rendering
  • Lists and Keys
  • Forms
  • Lifting State Up
  • Composition vs Inheritance
  • Web Components
  • Higher-Order Components
  • Integrating with Other Libraries
  • Typechecking With PropTypes
  • Storing History

My Capstone

My capstone was to create a microblog using React and Firebase. To do this I created 15 dynamic and static components.

All of the work is served from GitHub pages, and that data is managed by Firebase.

Libraries Used

Server:
  • Firebase
  • GitHub Pages

Clientside:
  • ES6
  • Radium
  • Re-base
  • React.JS
  • React-burger-nav
  • React-dom
  • React-router
  • Vanilla JS

Link: Github Repo

Link: React Blog

MongoDB (from “hu-mongo-us”) is an open-source, document-oriented database option for modern developers. Classified as a NoSQL database program, MongoDB uses JSON-like documents instead of the tables of a typical relational database. MongoDB is among the most common NoSQL databases today.

You don’t work with rows and tables in Mongo. Instead, you work with documents and collections (sets of documents). Documents contain JSON-like hashes, so any data that can be represented as a hash can easily be stored.
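To sketch that idea in plain Ruby (no Mongo required; the bicycle hash is just my example data):

```ruby
require 'json'

# Any hash that survives a round-trip through JSON can live in a document.
bicycle = { brand: "Kona", serial_number: "1245-ABCD", country: "Taiwan" }

# The JSON string is roughly what the stored document looks like.
document = JSON.parse(bicycle.to_json)

puts document["brand"] # symbol keys come back as JSON string keys
```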

First introduced in 2009, it was designed as a scalable document storage engine. Mongo is a schemaless database, meaning there are no requirements or enforcement around the structure of the data in a document. You can, however, add enforcement and validations to MongoDB using modules, other libraries, or your own code if you like.

In contrast to SQL data systems, Mongo can easily be scaled across a group of servers. Typical SQL databases usually live on a smaller number of servers that scale vertically, and it takes a while to sync data across a whole group of servers. Mongo is able to handle massive data sets quickly and efficiently, but it does this by sacrificing immediate consistency. Standard SQL services are more accurate more quickly, but they can become a bottleneck and have performance issues at high volumes. Many people also find it more familiar, on a conceptual level, to interact with SQL ORMs during the development process. Lastly, some data sets lend themselves to a more normalized structure, and MongoDB might not be a good fit for that data.

Install MongoDB

Install MongoDB using Homebrew:

$ brew install mongodb

Start up a MongoDB services locally:

$ brew services start mongodb

If you want to stop it at anytime, you can:

$ brew services stop mongodb

Start a new rails project (I’m using 5.1.4), without using Active Record (Bye Felicia!)

$ rails new mongo-bicyles --skip-active-record

Setting up Mongoid in Rails

Add the Mongoid gem to the bottom of your Gemfile.

gem "mongoid", git: 'git@github.com:mongodb/mongoid.git'

Then run bundle install:

$ bundle install

Init your app as a repo:

$ git init
$ git remote add origin ...

Just like when you use a relational db like Postgres, or MySQL, you need a configuration file. Mongoid installs a custom Rails generator for us:

$ rails generate mongoid:config

Open up the file it creates, located at config/mongoid.yml and take a look. Check out the different options you could configure. I removed all the comments for the code block below. For this intro I’m not going to change anything in the config file.

  development:
    clients:
      default:
        database: mongo_bicyles_development
        hosts:
          - localhost:27017
        options:

  test:
    clients:
      default:
        database: mongo_bicyles_test
        hosts:
          - localhost:27017
        options:
          read:
            mode: :primary
          max_pool_size: 1

At this point, I like to run $ rails s -p 5678 and make sure the server kicks over without any issues. If it does, check in what we’ve got so far.

$ git add .
$ git commit -am "first commit for Rails using MongoDB project"

Using Mongoid in Rails

Since this is just a demo, we can use Rails’ scaffolding generators to get us started quickly. (p.s. I never do this in a production project)

$ rails generate scaffold bicycle brand serial_number manufacturer country

Let’s see what was created with the scaffold:

$ git status

Specifically, let’s check out the model that was generated in app/models/bicycle.rb:

  class Bicycle
    include Mongoid::Document
    field :brand, type: String
    field :serial_number, type: String
    field :manufacturer, type: String
    field :country, type: String
  end

That doesn’t look like a typical Active Record model! MongoDB doesn’t have a database schema, so you will notice that there are no database migrations. If we want a new field, we can just add one to the model and to our views. Migrations can still be used, but they’re for migrating or transforming data, not changing the underlying structure of the database.
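For example, supporting a hypothetical wheel_size attribute is a one-line change to app/models/bicycle.rb (wheel_size is my invented field, not part of the scaffold above):

```ruby
class Bicycle
  include Mongoid::Document
  field :brand, type: String
  field :serial_number, type: String
  field :manufacturer, type: String
  field :country, type: String
  field :wheel_size, type: Integer # new field: no migration required
end
```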

Let’s explore what the rest of this data looks like via the scaffold we just created.

$ rails s -p 5678

Visit the scaffolded route http://localhost:5678/bicycles. There we see the good ole’ Rails CRUD forms.

Rails Crud Form

Understanding Documents in MongoDB

Let’s add a bicycle and look at the URL on the show page. The URL contains a param with a weird set of numbers and letters: http://localhost:5678/bicycles/5a387386f6ec12c032043f01. We’re used to seeing auto-incrementing integer IDs there, the ones created for us by the database. Usually each item is added as a row to a table, which increments the row counter by 1. Each row gets a locally unique sequential identifier.

MongoDB uses a gnarly-looking alphanumeric hash to create an object id like “5a387386f6ec12c032043f01”. This object id is always 12 bytes and is composed of a timestamp, a client machine id, a client process id, and a 3-byte incremented counter.
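Since the first 4 bytes are a timestamp, you can decode roughly when a document was created with plain Ruby. A sketch using the id above (the 12-byte layout is standard, but double-check the MongoDB docs for your version):

```ruby
# The first 8 hex characters (4 bytes) of an ObjectId are
# seconds since the Unix epoch.
object_id = "5a387386f6ec12c032043f01"

seconds = object_id[0, 8].to_i(16)
puts Time.at(seconds).utc # roughly when this document was created
```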

Let’s peer under the hood of MongoDB on the CLI and check out a document. We can do this by first looking in our config/mongoid.yml for development.clients.default.database. This application uses the database mongo_bicyles_development:

  mongo mongo_bicyles_development

Even though there is no Structured Query Language here, you still have access to a special command language.

Let’s take a look at the collections in the database:

> show collections

You can find records using this language; the format is db.[collection].find(). This command returns all the documents in a given collection. Since we only added one, that’s all we get:

> db.bicycles.find()
{ "_id" : ObjectId("5a387386f6ec12c032043f01"), "brand" : "Kona", "serial_number" : "1245-ABCD", "manufacturer" : "We Made IT", "country" : "Tawain" }

Mongo stores data as JSON, and uses JavaScript here in the command line tool.

> typeof db
object
> typeof db.bicycles
object
> typeof db.bicycles.find
function

Handy hint: if we want to see what the code for a Mongo command looks like, we can call the function without parentheses.

> db.bicycles.find
function (query, fields, limit, skip, batchSize, options) {
    var cursor = new DBQuery(this._mongo,
                             options || this.getQueryOptions());

    const session = this.getDB().getSession();

    const readPreference = session._serverSession.client.getReadPreference(session);
    if (readPreference !== null) {
        cursor.readPref(readPreference.mode, readPreference.tags);
    }

    const readConcern = session._serverSession.client.getReadConcern(session);
    if (readConcern !== null) {
        // ...
    }

    return cursor;
}
When might using MongoDB be a good idea?

Do you have a very rapidly scaling system with high volumes of traffic?

MongoDB is designed to be scalable and flexible, yet remain familiar enough that application engineers can easily pick it up. Working with Mongo is mostly the same as working with a traditional relational database management system. You can’t, however, do server-side JOINs between two sets of data.

Because it can’t do server-side JOINs, the relationships between different objects (Users have Posts, Posts have Comments, Comments have Users) can be tricky, as you need to set up those relationships in the model. If you have a high volume of data but a low level of complexity, or the access pattern of your application is lots of reads and few writes, then this could be a good paradigm for you to embrace. If you have a data team and a group of engineers who are fluent in managing big data, then you should also probably embrace it (or another NoSQL alternative).

When might it be a bad idea?

MongoDB encourages de-normalization of schemas, which might be too much for other engineers or DBAs to swallow. Some people find the hard constraints of a relational database reassuring and more productive. Having worked on projects with rapidly evolving data, I can see the need for a more structured approach.

Although sometimes restrictive, a database schema and the constraints it places on our data can be reassuring and useful. While MongoDB offers a huge increase in scalability and speed of record retrieval, its inability to relate documents from two different collections (the key strength of an RDBMS) often makes it a poor fit for a CRUD-focused web application.

Because MongoDB is focused on large datasets, it works best in large clusters, which can be a pain to architect and manage.

It can also lead to a lot of headaches if not applied properly (e.g., trying to enforce principles used in an RDBMS), or if it is used too early (i.e., for a system that doesn’t have super high traffic, but might at some point in the distant future). There are threads all over describing bad experiences like this.

Some helpful resources for next steps

Mongoid Gem (the official MongoDB pages):

Why I think Mongo is to Databases what Rails was to Frameworks

Old School Ryan Bates Rails-cast on Mongoid

The book ‘The Imposter’s Handbook’ by Rob Conery has also been a great help in understanding databases and the theory and applied knowledge of big data.

UPDATE: The client has recently replaced their entire custom RoR web application with an Adobe product. Subsequently, my map work is no longer there.

I worked on the team at Fine Design that recently launched the beautiful new Kimpton Hotels website. I had the pleasure of making the dynamic and static maps based on the Google Maps API. The experience was enlightening, and I wanted to share it with the world. Some of the map development process was easier than you might expect (thanks to some useful tools), some of it was way more complex than it should be, and some parts were even interesting.

What was useful?

There are a multitude of map platforms and libraries available on the interwebs. For this project we found the following to be most useful:

  • Google Maps API – The 500-pound gorilla in the room. No introduction is really needed. The reference tables alone print to 111 pages! It’s a little like reading a Soviet-era submarine manual.
  • Google Maps for Rails – A flexible gem that helps make developing maps for large sites easier. All the map objects created with this gem have customizable models and builder methods. This gem can be easily integrated with other JS libraries, and best of all, it works well with Rails controllers and views.
  • Geocoder – A gem that adds geocoding, reverse geocoding, and distance queries as Rails methods. Integrates well with the Google Maps for Rails (Gmaps4Rails) gem. We used this gem to create latitude and longitude data for all our destinations using street addresses. Lat. and long. data are the lifeblood of any map.
  • Infobox.js – An InfoBox behaves like a google.maps.InfoWindow, but it supports several additional properties, which allow for some fancy styling~!
  • Marker Clusterer Plus – This library creates and manages clusters for large amounts of markers, and adds lots o’ functionality and events to google.maps.Cluster objects.
  • Styled Map Wizard – Allows you to import and modify existing map styles in a wizard rather than needing to rebuild them from scratch each time. (Thanks Google for not including this feature out of the box ;)… )

Each of the libraries above seemed stable and fairly widely used, as evidenced by the latest update date of the repository or the number of related Stack Overflow questions. For this project we also made extensive use of jQuery and Underscore.js. Gmaps4Rails was by far the most useful gem. It helped tie together all the libraries above, provided a helpful backbone for structuring all of our maps’ related code, and brought with it an existing ecosystem with lots of map developers.

What was more complex than it should have been?

“Maps codify the miracle of existence.”

~Nicholas Crane, Mercator: The Man Who Mapped the Planet

Because of the mixture of software libraries and languages, I felt like an archaeologist uncovering some ancient, forgotten text.

The hardest part of creating these maps was knowing the names of the objects in a given context (see the crazy object chains in the function below for an example). This made it harder to surface and integrate the most advanced features we wanted to use from these libraries.

At one point the Project Developer said, “So basically it’s like converting Russian and French into Latin, so they can speak to each other.” To which I replied, “Yeah, and then into Esperanto so the end user can view them in their browser.” Because of this, turning a comp or a deliverable into reality was also occasionally way more complex than one might expect.

function myClick(id) {
  // `markers` is keyed by id; `cluster_markers` is keyed by latitude
  if (markers[id] !== undefined && Object.keys(cluster_markers).length > 0) {
    if (markers[id].getServiceObject() && !(cluster_markers[markers[id].getServiceObject().position.lat()])) {
      // Unclustered marker: fire its own mouseover
      google.maps.event.trigger(markers[id].getServiceObject(), 'mouseover');
    } else if (cluster_markers[markers[id].getServiceObject().position.lat()] && (handler.map.getServiceObject().getZoom() == 4)) {
      // Marker is clustered at this zoom level: fire the cluster's mouseover instead
      var cluster_trigger = cluster_markers[markers[id].getServiceObject().position.lat()];
      google.maps.event.trigger(handler.clusterer.getServiceObject(), 'mouseover', cluster_trigger);
    } else if (handler.map.getServiceObject().getBounds().contains(markers[id].getServiceObject().getPosition())) {
      google.maps.event.trigger(markers[id].getServiceObject(), 'mouseover');
    }
  }
}

Once a feature was unveiled, new feature requests were often uncovered as well (see the next section). This usually required extending some obscure functionality onto a different object and a visit to the deepest depths of the API catacombs (Eeek, it’s freaky down there!).

What was interesting?

During the build, there was a feature request to create an InfoWindow for markers. Once this was in place, it looked great. This led to a request for clusters to act and look just like regular map markers, InfoWindows and all. Simple, right? Surprise: it was a total pain in the butt! Documentation and Stack Overflow threads were pretty sparse in this area.

I built a custom solution byte-by-byte that ended up being about 85 lines of code (which I have since refactored down to 64). However, I genuinely enjoyed the process of figuring out which components to use and welding them all together. I had a deep feeling of accomplishment when I pulled it all off, and someone else might even find my work to be useful in the future (be still, my nerdy heart)!
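The full wiring into the map’s event system is too long to reproduce here, but the heart of the problem was generating InfoBox-style content for a whole cluster rather than a single marker. A stripped-down, hypothetical sketch of that piece (the function and field names are illustrative, not from the actual project):

```javascript
// Build HTML for a cluster's InfoBox from the markers it contains.
// A hypothetical sketch -- names and structure are illustrative only.
function clusterInfoContent(clusterMarkers) {
  var rows = clusterMarkers.map(function(m) {
    return '<li>' + m.title + '</li>';
  });
  return '<div class="cluster-info">' +
         '<h4>' + clusterMarkers.length + ' destinations</h4>' +
         '<ul>' + rows.join('') + '</ul>' +
         '</div>';
}

// In the real code, a string like this is handed to an InfoBox that opens on
// the cluster's 'mouseover' event, styled to match the single-marker windows.
var html = clusterInfoContent([{ title: 'Portland' }, { title: 'Seattle' }]);
```

The rest of the work was listening for the clusterer’s events and positioning the box over the cluster icon, which is where the sparse documentation made things interesting.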

Lastly, because of the Gmaps4Rails gem, I had the opportunity to use CoffeeScript. I found out that I actually really enjoy using CoffeeScript, and I don’t feel guilty about saying it anymore! It was fun, concise, and it compiled into something that was often more efficient than the regular ol’ JS that I write. Check out Brian Mann’s recent presentation on overcoming the fear of it.

What if I want to make maps too?

If you’re creating Google Maps in Rails, you should definitely consider using the tools above. Finding the tools and learning the ‘language’ each of them uses was half the battle. Learning where their gaps are and programming to fill them was the other half. There are a million other tools out there for Google Maps, and you should take your time to check them out too (one of my other favorites is SnazzyMaps).

Expect browser incompatibility issues to happen. For example, it took a while to figure out why SVG markers weren’t showing up in Firefox (but were in Safari, Chrome, and IE 9–11). If you want to ensure cross-browser compatibility, build ample troubleshooting and testing time into your project estimates. Stack Overflow is also your best friend in this process.

Last, and perhaps most importantly, work with the designer while they are creating their maps. Before a project starts, try to steep yourself in all the libraries above. Tease out what functionality is implicitly or explicitly contained in their comps. Then figure out whether it already exists in a library or needs to be custom made. This can help you uncover and avoid hidden pitfalls, which should make developing the maps easier.