The Coding Mant.is

Smashing Through Code

CLI Tricks: Navigating Around the Prompt — 28-May-2015

It’s not uncommon to need to make changes to a command after entering it in the command prompt. For example, perhaps you need to reuse a command from the history but with a small change or perhaps you need to fix a typo.

Luckily, Unix command prompts support both Emacs (the default) and Vi(m) key bindings for quickly moving to the spot where you need to make small edits.

Let’s take an example where you want to download a light AWS stemcell (~6 KB):

curl -L -J -O https://bosh.io/d/stemcell/bosh-aws-xen-ubuntu-trusty-go_agent

There is a typo in the above line: the path should include d/stemcells, not d/stemcell. Of course, the typo is awkwardly placed, so what’s the best way to fix it?
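As an aside: if you’d rather not edit interactively at all, the same fix can be scripted. A quick sketch using sed (standard on any Unix system):

```shell
# The command with the typo, fixed by substituting the bad path segment:
cmd='curl -L -J -O https://bosh.io/d/stemcell/bosh-aws-xen-ubuntu-trusty-go_agent'
printf '%s\n' "$cmd" | sed 's|/d/stemcell/|/d/stemcells/|'
# -> curl -L -J -O https://bosh.io/d/stemcells/bosh-aws-xen-ubuntu-trusty-go_agent
```

But for a one-off typo, the key bindings below are usually faster than writing a pipeline.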

Method E: Emacs Bindings

The key bindings of interest here are ^a, ^e, esc+f, and esc+b where ^ represents the ctrl key. Try the following:

  1. Copy the curl statement as-is, typo and all, into your command prompt
  2. Enter ^a to go to the beginning of the line
  3. Enter esc+f until you reach the end of the word stemcell (should be 9 times).
  4. Type in the s so stemcell becomes stemcells

At this point you can either enter ^e to go to the end of the line or Enter/Return to execute the command. Although we didn’t use it in this example, just like esc+f goes forward a word at a time, esc+b goes backward.

Caveat: make sure you don’t just hold down the esc key and keep hitting b or f. If you do, you’ll just type b’s and f’s. You must hit esc+b or esc+f each time you want to move backward/forward a word.

Method V: Vi(m) Bindings

Since Emacs is the default, to switch into Vi mode first run:

set -o vi
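If you’re ever unsure which mode is active, you can ask bash directly – `set -o` lists all shell options, including the two line-editing modes:

```shell
# List the line-editing options; in an interactive shell exactly one
# of these will be "on".
set -o | grep -E '^(emacs|vi)'
```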

As with the previous exercise, copy the curl command, typo and all, into the terminal. The key bindings of interest here are: ^, $, b, e, and w.

  1. Copy the curl statement as-is, typo and all, into your command prompt
  2. Hit esc to go into command mode
  3. Enter ^ to go to the beginning of the line
  4. Enter 16e, which will place the cursor at the end of the word stemcell.
  5. Hit a to enter insert mode next to the last l in stemcell
  6. Type in the s so stemcell becomes stemcells

At this point you can either use esc (to go back into command mode) and enter $ to go to the end of the line or Enter/Return to execute the command. Note that the command can be run in either command or insert mode.

If you wish to return to Emacs mode at any time, enter:

set -o emacs
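To make either choice permanent across sessions – and across every program that uses GNU readline (bash, psql, and so on) – you can set it in ~/.inputrc. A minimal example:

```
# ~/.inputrc: applies to all readline-based programs
set editing-mode vi
```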

On S&W blog

PGConf 2015 — 17-April-2015

I attended PGConf NYC 2015 on March 26-27. Going to conferences is awesome, in my opinion, because there is an amazing collection of minds available. Minds interested in the same topic you are, which in the nerdosphere(TM) is sometimes hard to come by.

Before getting started discussing my favorite talks, I would like to send a quick thank you to the conference sponsors, organizers, and venue. Organizing any conference, especially a successful one, is a lot of hard work. Double props to both the organizers and the venue (New York Marriott Downtown) for being able to cater to people with special diets – in addition to the executive chef Ed Pasch speaking to me personally to check on my dietary restrictions/allergies and making a meal safe for me to eat, I also saw a kosher meal and a vegan meal for two other guests. I’m so used to having to travel with my own food that this was a very pleasant surprise.

My Favorite Talks

Now for the good stuff! Amongst all the amazing talks I attended on both days, I did have a few favorites:

  • Favorite “inspiration” talk: Building a ‘Database of Things’ with Foreign Data Wrappers
  • Favorite “new feature” talk: JSON and PostgreSQL, the State of the Art
    • Includes a comparison between PG 9.4 and MongoDB 3.x
  • Favorite “tricks” talk: A TARDIS for your ORM – application level time travel in PostgreSQL
  • Favorite “upcoming feature” talk: Row Level Security

Since all of these talks were very informative, and there are several, I’m only going to scratch the surface of what I enjoyed from each. I will be linking to the lecture slides as available and I encourage everyone to take a look.

Building a ‘Database of Things’ with Foreign Data Wrappers

First stop: this talk I mentally dubbed the “fun” talk, because nothing beats having someone control a lights display with PG commands at a PG conference. In order to make this work, Database of Things speaker Rick Otten used Philips Hue light bulbs and, of course, PG’s foreign data wrappers (FDWs). As a point of interest, he used multicorn, which is a PG extension for developing FDWs in Python.

Briefly: the purpose of the talk was to explore the usefulness of FDWs in PG. For the uninitiated, FDWs are used to access data from a “foreign” database. For example, you may need data from an Oracle database, or even a flat file. To access the data you would use the appropriate FDW. FDW read-only support was added in PG 9.1 and write support was added in 9.3 in compliance with the SQL/MED standard.
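To give a flavor of what using an FDW looks like, here is a hedged sketch using file_fdw, the flat-file wrapper that ships with PG; the table definition and file path are made up for illustration:

```sql
CREATE EXTENSION file_fdw;
CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

-- A flat CSV file exposed as if it were an ordinary table.
CREATE FOREIGN TABLE sensor_log (
    reading_time timestamptz,
    value        numeric
) SERVER files OPTIONS (filename '/tmp/sensor_log.csv', format 'csv');

SELECT * FROM sensor_log;
```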

Why I liked this talk: App ideas! I’m a big fan of “the internet of things” and making our devices “smarter”. For example: you could write a “smart house” application that does things like open your garage when your car approaches (no garage door opener button required), turns on the light in the garage, and then turns on the light of the room you would enter in your house. You could also program some other basics, like light timers and such. Pranks would be awesome too – turning lights on and off, changing their color, making “annoying” sounds. Convince your [insert close friend/relative here] that his/her house is haunted for a day! Or, more benignly, make really awesome outdoor holiday displays (I’m looking at you Halloween and Christmas).

Lecture slides are in the extra directory in the project repo on GitHub. The PG wiki also has a well written article about FDWs here.

Speaker on Twitter

JSON and PostgreSQL, the State of the Art

I really like this talk because it touches on something I am working on as a learning exercise – JSONB.

JSONB is the “JSON Binary” datatype introduced in version 9.4 (the latest release as of this post). In a side project I am working on, we are using the database for more than simple storage. In particular, we are loading JSON data from an API directly into the database and then manipulating that data in PG for use in various tables. The goal is to really grok how to maximize PG’s potential and performance with stored procedures.

Something this talk showed that I was not previously aware of is that JSONB containment (@> / <@) matches structure from the top level down – it does not search for keys or values at arbitrary depth. Here are a couple of example statements from the slides:

postgres=> SELECT '{"a": 1, "b": 2}'::jsonb @> '{"a": 1}'::jsonb;                -- true
postgres=> SELECT '[1, 2, 3]'::jsonb @> '[1, 3]'::jsonb;                         -- true
postgres=> SELECT '{"a": {"b": 7, "c": 8}}'::jsonb @> '{"a": {"c": 8}}'::jsonb;  -- true
postgres=> SELECT '{"a": {"b": 7}}'::jsonb @> '{"b": 7}'::jsonb;                 -- false
postgres=> SELECT '{"a": 1, "b": 2}'::jsonb @> '"a"'::jsonb;                     -- false

Of these statements, the first three return true and the last two return false. I found this interesting because I initially assumed all five statements would return true and I could definitely see myself making an error implementing this.

Lecture slides are available here.

Speaker on Twitter

Secondary concept: How does MongoDB compare to PostgreSQL?

The JSONB talk included something else that I found interesting: a performance comparison between PG and Mongo.

Historically, I’ve heard a lot of negativity about Mongo. In fact, the few times I’ve worked with Mongo 2.x I’ve found it to be quite painful – for example, I’ve run into issues with Mongo silently failing on more than one occasion, which is hard to troubleshoot. On top of that, posts like these typically show PG outperforming Mongo 2.x by quite a large margin.

To compare how PG and Mongo handle JSON and JSONB transactions, the speaker ran several tests with both 4 and 200 JSON fields, using MongoDB 3.x. Although there are some tests where PG still reigns supreme:

[Chart: PG Relational]

There are several cases where Mongo is comparable to, or even exceeds, PG performance:

[Chart: PG Mongo JSON]

[Chart: PG Mongo No Index]

[Chart: PG Mongo GIN]

For the tests and explanations, take a look at the slide deck, starting on slide 75.

A TARDIS for your ORM – application level time travel in PostgreSQL

As a Doctor Who fan, I just want to take a moment to say that if the talk didn’t live up to my excitement from the title alone, I would have been disappointed.

Luckily, it did!

The challenge this solution was designed to solve was reproducing report runs in a system that held a lot of statistical data that included personal information. More granularly, the solution was originally engineered to be able to reproduce incorrect reports.

By example: you have a row of data that was entered as {Jane, Doe, Dec-25-1986, F, Security Guard} on Jun 1 2015, but was then corrected to {Jayne, Doe, Dec-25-1986, M, Security Guard} on Jun 20 2015 (too soon?). All reports run between Jun 1 and Jun 19 2015 would include the first result, and all reports thereafter would include the corrected result. If some day in 2016 you needed to replicate the report as it was run on Jun 15 2015, you would need to have the uncorrected result returned.

So, what did they do?

They built a solution that included PG (of course!) as well as JBoss/Hibernate. In order to keep their old data they made history tables and included a column with a range type to keep track of when specific data points were valid. To keep the tables up to date, they wrote a series of trigger functions that detect whether data/tables are being updated or deleted and adjust the corresponding date ranges. Then they created a “time travel” schema and used a schema search setting to determine which (autogenerated) views are returned. To determine which reports contained a specific person, they used full query logging for reports together with “time travel”.
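To make the history-table idea concrete, here is a hedged sketch of what such a table and trigger might look like. The table, column names, and trigger body are my own guesses for illustration, not the speaker’s actual schema:

```sql
-- Hypothetical history table: "valid" records when each row version was current.
CREATE TABLE person_history (
    person_id  integer,
    first_name text,
    last_name  text,
    valid      tstzrange
);

-- On every change, close out the open-ended range for the old version
-- and insert the new version with a range that starts now.
CREATE OR REPLACE FUNCTION person_versioning() RETURNS trigger AS $$
BEGIN
    UPDATE person_history
       SET valid = tstzrange(lower(valid), now())
     WHERE person_id = NEW.id
       AND upper_inf(valid);
    INSERT INTO person_history
    VALUES (NEW.id, NEW.first_name, NEW.last_name, tstzrange(now(), NULL));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Reproducing a report as of Jun 15 2015 then becomes a query against the history table where valid @> 'Jun-15-2015'::timestamptz.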

Some caveats/requirements are covered in the talk; lecture slides are here.

Speaker on Twitter

Row Level Security

I’m really excited about this feature, confirmed for release with PG 9.5. The scoop on this one is that this is a security feature that can restrict what rows are returned in a dataset. This is done by creating a security policy (CREATE POLICY) and applying it to your tables. Thinking forward, the team has also made it possible to add a security policy to an existing table (using ALTER TABLE with ENABLE ROW LEVEL SECURITY).

I think the main reason this excites me is because of a past job I had working as a DB admin at a company that handled data for clinical trials (read: patient data, HIPAA, paperwork, security policies up the wazoo). As a point of interest that company uses Ingres, which I’ve found amuses PG fans more so than most ;)

Now that I’m thinking about them, I could definitely see this feature being useful in that setting (if they were using postgres). For example, let’s say you have 3 groups tracking breast cancer data. Maybe they are gathering the same data, so you could have one table but you want to make sure that no one can see the others’ data. Enter RLS. You could restrict which rows are available to each group, so that they only see their own data when they run queries :D
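As a sketch of how that could look (the table, column, and policy names are hypothetical; each research group is assumed to connect as its own database role):

```sql
-- Turn RLS on for the shared table.
ALTER TABLE trial_data ENABLE ROW LEVEL SECURITY;

-- Each group's role only sees rows tagged with its own name.
CREATE POLICY group_isolation ON trial_data
    USING (research_group = current_user);
```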

Although the link to the PGConf Row Level Security presentation isn’t available yet, the speaker did give a similar presentation a few months ago. Those slides are here. The slides are filled with examples of how to CREATE, UPDATE, DROP/DELETE, ENABLE, DISABLE, etc. – so I highly recommend reviewing them. You may also want to reference the PG developer docs on their wiki here.

Speaker on Twitter

Link to work post

Introduction to Cloud Foundry’s Health Monitor — 17-December-2014

What is the Health Monitor?

The purpose of the Health Monitor (HM) is to monitor all existing apps and ensure that the appropriate number of instances are running. If there is a discrepancy, then the HM will either prompt the Cloud Controller to start new instances (if there are too few) or stop existing instances (if there are too many).

Some Background

The current release of the Health Monitor is called the Health Monitor 9000 (usually seen as HM9000 or HM9K). The HM9K is a complete rewrite of the Health Monitor. According to the release post, the maintainers’ main goals for the rewrite were to:

  • Solve the then-issue that the HM was a single point of failure
  • Make use of newer technologies (Go and etcd instead of Ruby)
  • Get into the practice of replacing existing components with rewrites

The switch to Go/etcd provides the key difference between the original HM and the HM9K: the ability to store information about the state of the environment in a database instead of in component memory. This not only allows multiple instances of the HM to work concurrently, but also keeps the information available in the event of a single component failure.

How does the HM work?


[Diagram omitted. Source: home-grown diagram made with Gliffy.]

The HM9K:

  • Requests the ideal state of the droplets from the Cloud Controller
  • Stores the ideal state in etcd
  • Listens to droplet heartbeats over NATS to obtain the actual state of the droplets
  • Sends droplet START/STOP requests over NATS
  • Moves applications to DEAs during a rolling deploy

Since the HM9K has a great deal of control, it’s been limited to only take actions if the information it is using is “fresh”. Specifically:

  • The actual state is “fresh” if the HM9K has an active NATS connection, is regularly receiving DEA heartbeats, and is able to successfully store the actual state into etcd.
  • The desired state is “fresh” if the HM9K successfully downloads the desired state from Cloud Controller (without timing out) and successfully stores the information into etcd.

If either the actual or desired states are not fresh, the HM9K will stop taking any actions.

What if something goes wrong?

Tips for troubleshooting the HM9K from Onsi @ Pivotal (who is now famous for his talk about Diego):

  • Make sure hm9000_noop is set correctly. When set to false the Cloud Controller will use the HM9K, but if set to true it will use health_manager_next (which was the predecessor to the HM9K).
  • Verify that etcd is not in a bad state. Typically etcd will only enter a bad state during deployment rather than later. If etcd is in a bad state, you will notice that the HM9K will no longer be able to store current information about the state of the environment. When this happens you should restart etcd by doing the following:
    • bosh ssh into the etcd node
    • Run monit stop all
    • Delete the etcd data directory under /var/vcap/store
    • Repeat for all etcd nodes
    • Once complete for all nodes, restart them by running monit start all.
  • Verify that Cloud Controller is able to send the desired state in time*. (Recall that if the data is not sent/received in time, then the state loses freshness and the HM9K will no longer take any action.) Check the logs of the desired_state_fetcher to view the status. Typically this is only a problem when the Cloud Controller is under heavy load.
    • * What “in time” is and whether you can change it depends on your version of Cloud Foundry. As of Jul 24 2014 users can control the timeout value by setting the hm9000.fetcher_network_timeout_in_seconds parameter in the manifest. The default value is currently 30 seconds. Prior to this, the timeout was set to 10 seconds and was not user configurable.
  • Check the load on etcd. According to the README, if the DesiredStateSyncTimeInMilliseconds exceeds ~5000 (5 seconds) and the ActualStateListenerStoreUsagePercentage exceeds 50-70% then clustered etcd may be unable to handle the load. The current workaround for this issue is to run a single HM9K node instead of a cluster.

View the contents of the store

If you bosh ssh into HM9K VM you can run:

/var/vcap/packages/hm9000/hm9000 dump --config=/var/vcap/jobs/hm9000/config/hm9000.json

This will fetch both the desired and actual state from etcd and print a pretty-formatted bird’s-eye view of all running & desired apps on your Cloud Foundry installation. You can also get raw JSON output by passing dump the --raw flag.

I have included some example output below. This is for a CF instance that has a single application with five instances:

$ /var/vcap/packages/hm9000/hm9000 dump --config=/var/vcap/jobs/hm9000/config/hm9000.json
Dump - Current timestamp 1418830322
Store is fresh
====================

Guid: 9d7923df-1e45-4201-943b-0cf2ec086ee9 | Version: 561f3b0c-8beb-4faa-af48-07b4f8ad379e
  Desired: [5] instances, (STARTED, STAGED)
  Heartbeats:
    [4 RUNNING] 2e9e4142946743f2ba5372cc1c29aa86 on 0-1c4
    [1 RUNNING] 5989d49130c8440aa86abf22c5f10215 on 0-1c4
    [2 RUNNING] 0673b4cc39464025b3d6ae6be5a9f285 on 0-1c4
    [3 RUNNING] ff875188ca73454aa67a0acf689b3e29 on 0-1c4
    [0 RUNNING] 017b2076a4024416bb8cee7dd86df59c on 0-1c4

RAW output for the above:

$ /var/vcap/packages/hm9000/hm9000 dump --config=/var/vcap/jobs/hm9000/config/hm9000.json --raw
Raw Dump - Current timestamp 1418830336
/hm/locks/Analyzer [TTL:9s]:
    9eefca86-cb36-4407-7af7-1d0b7a43df22
/hm/locks/Fetcher [TTL:8s]:
    66a2c18e-8095-4cd8-5e86-0a5a3d998c3e
/hm/locks/Sender [TTL:10s]:
    3fbb6b85-d608-433d-51b5-cd35d57e2150
/hm/locks/Shredder [TTL:8s]:
    5e699aa8-8af1-4013-7313-f57bfc9f72ce
/hm/locks/evacuator [TTL:7s]:
    6c39a226-bc89-40b5-67c9-b1644184ffe8
/hm/locks/listener [TTL:9s]:
    2be6e723-6edc-4b18-62f8-aec200be161d
/hm/locks/metrics-server [TTL:6s]:
    43014106-65db-4f36-5eb8-98a617108529
/hm/v4/actual-fresh [TTL:22s]:
    {
      "timestamp": 1418758444
    }
/hm/v4/apps/actual/9d7923df-1e45-4201-943b-0cf2ec086ee9,561f3b0c-8beb-4faa-af48-07b4f8ad379e/017b2076a4024416bb8cee7dd86df59c [TTL: ∞]:
    0,RUNNING,1418830256.8,0-1c470b76e6a34de396016de6484ea9e1
/hm/v4/apps/actual/9d7923df-1e45-4201-943b-0cf2ec086ee9,561f3b0c-8beb-4faa-af48-07b4f8ad379e/0673b4cc39464025b3d6ae6be5a9f285 [TTL: ∞]:
    2,RUNNING,1418830258.6,0-1c470b76e6a34de396016de6484ea9e1
/hm/v4/apps/actual/9d7923df-1e45-4201-943b-0cf2ec086ee9,561f3b0c-8beb-4faa-af48-07b4f8ad379e/2e9e4142946743f2ba5372cc1c29aa86 [TTL: ∞]:
    4,RUNNING,1418830258.2,0-1c470b76e6a34de396016de6484ea9e1
/hm/v4/apps/actual/9d7923df-1e45-4201-943b-0cf2ec086ee9,561f3b0c-8beb-4faa-af48-07b4f8ad379e/5989d49130c8440aa86abf22c5f10215 [TTL: ∞]:
    1,RUNNING,1418830258.6,0-1c470b76e6a34de396016de6484ea9e1
/hm/v4/apps/actual/9d7923df-1e45-4201-943b-0cf2ec086ee9,561f3b0c-8beb-4faa-af48-07b4f8ad379e/ff875188ca73454aa67a0acf689b3e29 [TTL: ∞]:
    3,RUNNING,1418830258.5,0-1c470b76e6a34de396016de6484ea9e1
/hm/v4/apps/desired/9d7923df-1e45-4201-943b-0cf2ec086ee9,561f3b0c-8beb-4faa-af48-07b4f8ad379e [TTL: ∞]:
    5,STARTED,STAGED
/hm/v4/dea-presence/0-1c470b76e6a34de396016de6484ea9e1 [TTL:22s]:
    0-1c470b76e6a34de396016de6484ea9e1
/hm/v4/desired-fresh [TTL:112s]:
    {
      "timestamp": 1418829728
    }
/hm/v4/metrics/ActualStateListenerStoreUsagePercentage [TTL: ∞]:
    0.00295
/hm/v4/metrics/DesiredStateSyncTimeInMilliseconds [TTL: ∞]:
    1.07322
/hm/v4/metrics/ReceivedHeartbeats [TTL: ∞]:
    13.00000
/hm/v4/metrics/SavedHeartbeats [TTL: ∞]:
    13.00000
/hm/v4/metrics/StartCrashed [TTL: ∞]:
    0.00000
/hm/v4/metrics/StartEvacuating [TTL: ∞]:
    0.00000
/hm/v4/metrics/StartMissing [TTL: ∞]:
    0.00000
/hm/v4/metrics/StopDuplicate [TTL: ∞]:
    0.00000
/hm/v4/metrics/StopEvacuationComplete [TTL: ∞]:
    0.00000
/hm/v4/metrics/StopExtra [TTL: ∞]:
    5.00000

For more information, please see the HM9K README.

An example issue and its resolutions from VCAP Dev

Issue: cf apps shows 0/N apps running

Specifically, the output shows:

Showing health and status for app rubytest in org org / space space as user...
OK

requested state: started
instances: 0/1
usage: 1G x 1 instances
urls: rubytest.mydomain.dev

     state     since                    cpu    memory        disk
#0   running   2014-03-20 12:00:19 PM   0.0%   76.2M of 1G   58.7M of 1G

The logs show some bizarre behavior as well:

{"timestamp":1395331896.365486,"message":"harmonizer: Analyzed 1 running 1 missing instances. Elapsed time: 0.009356267","log_level":"info","source":"hm","data":{},"thread_id":16810120,"fiber_id":24014660,"process_id":6740,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/harmonizer.rb","lineno":229,"method":"finish_droplet_analysis"}

Followed later by:

{"timestamp":1395332173.518845,"message":"harmonizer: droplet GC ran. Number of droplets before: 2, after: 1. 1 droplets removed","log_level":"info","source":"hm","data":{},"thread_id":16810120,"fiber_id":24014660,"process_id":6740,"file":"/var/vcap/packages/health_manager_next/health_manager_next/lib/health_manager/harmonizer.rb","lineno":184,"method":"gc_droplets"}

So although initially the number of running applications is incorrectly identified, at some point the HM9K does clean up the extra instance.

Solution: The poster of the above indicated that the solution to his issue was that the Cloud Controller was working with the incorrect Health Monitor – health_manager_next instead of HM9K.

A later poster was using the HM9K and reported an experience with the same behavior. By running cf apps he saw output similar to the above for all his apps, and when he tailed /var/vcap/sys/log/hm9000/hm9000_listener.stdout.log there were no logged heartbeats.

He stated that he discovered that the issue was caused by his etcd nodes becoming unclustered. Since he could not directly resolve the issue, he followed the steps for what to do when etcd entered a bad state and successfully re-created the cluster.

Resources

[From my work blog here.]

Running Cloud Foundry Locally with BOSH Lite —

Want to play with Cloud Foundry without using TryCF (requires AWS) or setting up a trial account with one of the PaaS providers out there (e.g. PWS)? Why not set it up on your own laptop?

Getting Started

The 411 on my laptop:

  • 2012 Retina Macbook Pro
  • 16 GB RAM
  • 768 GB SSD
  • 2.7 GHz i7 processor
  • Mac OS 10.10.1

FYI: I have already installed the CF CLI tools on my laptop, so although I will explain how to install them, I will not be installing them again at this time.

It’s worth mentioning that I did also try running Cloud Foundry on a different Macbook with 8 GB of RAM with limited success. So for the RAM at least I would recommend having 16 GB+. This will provide enough memory for Cloud Foundry to run more comfortably alongside the system processes.

Ruby, Go, Vagrant, and VirtualBox

Make sure that you have:

  • the latest stable release of Go
  • the latest stable release of Ruby
    • Optional: RVM
      Not directly required for Cloud Foundry, but this will come in handy if you need to install/manage more than one version of Ruby.
  • the latest release of Vagrant
  • the latest release of VirtualBox

Before proceeding, check the installed versions using go version, rvm list (or ruby --version if you do not have RVM), vagrant --version, and vboxmanage --version. I am currently running the latest stable releases for all the above, in addition to version 1.9.3 for Ruby.

$ go version
go version go1.3.3 darwin/amd64

$ rvm list

rvm rubies

   ruby-1.9.3-p551 [ x86_64 ]
=* ruby-2.1.5 [ x86_64 ]

# => - current
# =* - current && default
#  * - default

$ vagrant --version
Vagrant 1.6.5

$ vboxmanage --version
4.3.18r96516

Installing BOSH Lite

In order to run Cloud Foundry you must first install BOSH. To provide some basic familiarity, there are three “types” of BOSH (if you will):

  • microBOSH
  • BOSH
  • BOSH Lite

BOSH is used to deploy Cloud Foundry and microBOSH is used to deploy BOSH. BOSH Lite is used for local instances of Cloud Foundry – for example on a laptop like I’m doing.

The instructions for installing BOSH Lite are available on the BOSH Lite README. I followed the instructions for Vagrant and VirtualBox.

BOSH Lite install failure: nokogiri

When I first tried to install BOSH Lite with gem install bosh_cli the installation failed because it needed nokogiri:

...
Fetching: nokogiri-1.6.5.gem (100%)
Building native extensions.  This could take a while...
ERROR:  Error installing bosh_cli:
  ERROR: Failed to build gem native extension.
...

I had actually run into this issue before, on another Macbook running OS 10.9.x. I was only able to install nokogiri using the Xcode CLI tools:

$ xcode-select --install
xcode-select: note: install requested for command line developer tools

$ gem install nokogiri -- --with-xml2-include=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/libxml2
Building native extensions with: '--with-xml2-include=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/libxml2'
This could take a while...
Successfully installed nokogiri-1.6.5
Parsing documentation for nokogiri-1.6.5
Installing ri documentation for nokogiri-1.6.5
Done installing documentation for nokogiri after 3 seconds
1 gem installed

(The Xcode CLI tools install will pop up a software agreement via the App Store that you must agree to in order to install the software. The installation/updates for the Xcode CLI tools will subsequently be handled in the App Store.)

Once nokogiri was installed, I was able to install the BOSH CLI tools without difficulty.

The first time you start the VM, Vagrant will use the Vagrantfile in the BOSH Lite directory to install/create the VM. You should see something similar to the following:

$ vagrant up --provider=virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'cloudfoundry/bosh-lite' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: 388
==> default: Loading metadata for box 'cloudfoundry/bosh-lite'
    default: URL: https://vagrantcloud.com/cloudfoundry/bosh-lite
==> default: Adding box 'cloudfoundry/bosh-lite' (v388) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/cloudfoundry/boxes/bosh-lite/versions/388/providers/virtualbox.box
==> default: Successfully added box 'cloudfoundry/bosh-lite' (v388) for 'virtualbox'!
==> default: Importing base box 'cloudfoundry/bosh-lite'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'cloudfoundry/bosh-lite' is up to date...
==> default: Setting the name of the VM: bosh-lite_default_1418694875085_89186
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
    default: /vagrant => /Users/quinn/Development/BOSHlite/bosh-lite

You should then be able to target the BOSH Lite director and update your routing table:

$ bosh target 192.168.50.4 lite
Target set to `Bosh Lite Director'
Your username: admin
Enter password: *****
Logged in as `admin'

$ bin/add-route
Adding the following route entry to your local route table to enable direct warden container access. Your sudo password may be required.
  - net 10.244.0.0/19 via 192.168.50.4
Password:
add net 10.244.0.0: gateway 192.168.50.4

Note: when I did this install on another laptop running OS 10.9.x, I ran into an issue where the route script could not run and terminated with the error route: command not found. Turns out, somehow my PATH variable had become foobar-ed (technical term) at some point. Fixing my PATH variable resolved the issue.

Deploying Cloud Foundry

Now we’re going to hop over to the Cloud Foundry install instructions. Using the install script:

  • Install spiff (requires Homebrew)
  • Clone the cf-release repo
  • Run ./bin/provision_cf

Install script failure: timeout

The only real issue I encountered during the installation is that when I ran the script, I encountered the following error a few times (which halted the install):

Blobstore error: Failed to fetch object, underlying error: #<HTTPClient::ReceiveTimeoutError: execution expired>

After a quick search I found that people just restarted the install and it would complete. The install does not redownload packages that have already been downloaded, it just flags that there are already local copies until it gets to the packages it hasn’t downloaded yet.

For reference, I had to run the script a total of three times before all the packages were downloaded successfully. I did not encounter any other issues with my installation.

Installing the CF CLI tools

Personally, I installed the CF CLI tools using the latest binary installer (linked on the README). The development team is also experimenting with a Homebrew install, but at the time of this writing it is still experimental.

As a quick verification that the CLI tools are installed correctly, try checking the version. You should see something similar to the following:

$ cf --version
cf version 6.7.0-c38c991-2014-11-12T01:45:23+00:00

Creating the initial Org and Space

Targeting the API and logging in is the same procedure that you would use with a PaaS provider (e.g. PWS) or TryCF (requires AWS):

$ cf api --skip-ssl-validation https://api.10.244.0.34.xip.io 
Setting api endpoint to https://api.10.244.0.34.xip.io...
OK


API endpoint:   https://api.10.244.0.34.xip.io (API version: 2.18.0)
Not logged in. Use 'cf login' to log in.

$ cf login
API endpoint: https://api.10.244.0.34.xip.io

Email> admin

Password>
Authenticating...
OK

Select an org (or press enter to skip):

Org>



API endpoint:   https://api.10.244.0.34.xip.io (API version: 2.18.0)
User:           admin
No org or space targeted, use 'cf target -o ORG -s SPACE'

At first you won’t have any orgs (or spaces):

$ cf orgs
Getting orgs as admin...

name
No orgs found

This is different from service providers like PWS and TryCF, both of which have an org and a development space when they are created. Although orgs and spaces will be discussed in greater detail separately, please go ahead and create an org and at least one space. Follow the output suggestions to target the org and space.

$ cf create-org quinn
Creating org quinn as admin...
OK
TIP: Use 'cf target -o quinn' to target new org

$ cf target -o quinn
API endpoint:   https://api.10.244.0.34.xip.io (API version: 2.18.0)
User:           admin
Org:            quinn
Space:          No space targeted, use 'cf target -s SPACE'

$ cf create-space development
Creating space development in org quinn as admin...
OK
Assigning role SpaceManager to user admin in org quinn / space development as admin...
OK
Assigning role SpaceDeveloper to user admin in org quinn / space development as admin...
OK
TIP: Use 'cf target -o quinn -s development' to target new space

$ cf target -o quinn -s development
API endpoint:   https://api.10.244.0.34.xip.io (API version: 2.18.0)
User:           admin
Org:            quinn
Space:          development

[From my work blog entry here.]

Code spelunking to build a CF Plugin — 5-December-2014

This is a quick walkthrough of how Long found the information he needed to build the cf info plugin.

Following a trail of breadcrumbs

Determine the requirements:

  1. Print the currently targeted org and space
  2. Print the API version and endpoint
  3. Print the user id of the current user (similar to whoami in *nix OSes).

Fulfilling the requirements:

This information comes from cf target, so first we’ll take a look in target.go.

  • Line 83 in target.go prints the current session information as reported from ui.go.
  • Line 196 in ui.go references a UserEmail method.
  • Line 175 of ui.go shows that config is actually from the core_config package.
  • There are a few *.go files in the core_config package, but searching this repository for UserEmail shows that the method is defined in config_repository.go on lines 199-204.
  • UserEmail requires a struct, called ConfigRepository.
  • ConfigRepository is built from the NewRepositoryFromPersistor method. (You can tell this either by the return &ConfigRepository on line 23 or by noting that NewRepositoryFromPersistor returns all the fields needed by the struct – i.e. data, mutex, initOnce, persistor, and onError.)
  • NewRepositoryFromPersistor is returned from the method above it, NewRepositoryFromFilepath.

How to get the file path? Searching for CF_HOME (the environment variable that sets the CLI’s home directory) shows that it is defined in config_helpers.go. Now what to do with that information:

  • The DefaultFilePath is defined using either $CF_HOME/.cf or $HOME/.cf ($HOME is the user’s home directory in Unix environments).
    • As a sanity check, if you are using a Unix environment run echo $CF_HOME. If it is non-empty, then run less $CF_HOME/.cf/config.json. If it is empty, run less $HOME/.cf/config.json. You should see the configuration file that cf target uses to report the org, space, and API information (there is other information in there as well).
  • Recall that UserEmail is defined in lines 199-204 of config_repository.go. You can see that config_repository.go also has the other information we wish to pull – the org (OrganizationFields), space (SpaceFields), API endpoint (ApiEndpoint), and API version (ApiVersion).

What to do with all of that

Basically, everything in reverse. You supply the default file path, which points at config.json, to NewRepositoryFromFilepath. config.json contains the org, space, API endpoint, API version, and Access Token. The existing cf code decodes the Access Token, and the UserEmail method extracts the email address from that decoded token. Everything else is pulled directly from the repository you just defined.

Addendum: What is going on to get that email anyway?

Let’s take a quick look in config.json. The 7th line should be the Access Token, which should look similar to this:

"AccessToken": "bearer eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI3NzI2ZDE2MS1hNWUzLTQwZjMtYTEzYy00OTlmMDNjOTBhZGIiLCJzdWIiOiI2Y2I5MTA0Yy1hZTMxLTQxYTMtOGQ4MS1jYjUxZjg0MTk5ZTMiLCJzY29wZSI6WyJjbG91ZF9jb250cm9sbGVyLmFkbWluIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwiY2xvdWRfY29udHJvbGxlci53cml0ZSIsIm9wZW5pZCIsInBhc3N3b3JkLndyaXRlIiwic2NpbS5yZWFkIiwic2NpbS53cml0ZSJdLCJjbGllbnRfaWQiOiJjZiIsImNpZCI6ImNmIiwiZ3JhbnRfdHlwZSI6InBhc3N3b3JkIiwidXNlcl9pZCI6IjZjYjkxMDRjLWFlMzEtNDFhMy04ZDgxLWNiNTFmODQxOTllMyIsInVzZXJfbmFtZSI6ImFkbWluIiwiZW1haWwiOiJhZG1pbiIsImlhdCI6MTQxNzgxMTU3NywiZXhwIjoxNDE3ODEyMTc3LCJpc3MiOiJodHRwczovL3VhYS41NC4xNzQuMjI5LjExOS54aXAuaW8vb2F1dGgvdG9rZW4iLCJhdWQiOlsic2NpbSIsIm9wZW5pZCIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdfQ.XNaYq8rxpvwWx9kySIDqbKs0BuyeOMMwAPb5YQaT-9MIyr3YalCE_2gTg-fl0xulj4u-VoNme3OGZ2T3tFFUfBKgo3U7R_pl5OpcaetKslbvKtYpne7N30KMQySMqVVVooGqlReoI_n5m5O7ZIASiG8P1QtwuVrZPkPhbjsGfBE",

Source: The above is the access token from a now-destroyed TryCF instance.

This is a JSON Web Token (JWT). You can read about JWT here. For now, all I really care about is that its structure is <header>.<claims>.<signature>. The <claims> section of the above token is between the two periods:

eyJqdGkiOiI3NzI2ZDE2MS1hNWUzLTQwZjMtYTEzYy00OTlmMDNjOTBhZGIiLCJzdWIiOiI2Y2I5MTA0Yy1hZTMxLTQxYTMtOGQ4MS1jYjUxZjg0MTk5ZTMiLCJzY29wZSI6WyJjbG91ZF9jb250cm9sbGVyLmFkbWluIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwiY2xvdWRfY29udHJvbGxlci53cml0ZSIsIm9wZW5pZCIsInBhc3N3b3JkLndyaXRlIiwic2NpbS5yZWFkIiwic2NpbS53cml0ZSJdLCJjbGllbnRfaWQiOiJjZiIsImNpZCI6ImNmIiwiZ3JhbnRfdHlwZSI6InBhc3N3b3JkIiwidXNlcl9pZCI6IjZjYjkxMDRjLWFlMzEtNDFhMy04ZDgxLWNiNTFmODQxOTllMyIsInVzZXJfbmFtZSI6ImFkbWluIiwiZW1haWwiOiJhZG1pbiIsImlhdCI6MTQxNzgxMTU3NywiZXhwIjoxNDE3ODEyMTc3LCJpc3MiOiJodHRwczovL3VhYS41NC4xNzQuMjI5LjExOS54aXAuaW8vb2F1dGgvdG9rZW4iLCJhdWQiOlsic2NpbSIsIm9wZW5pZCIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdfQ

If you paste that information into Base 64 Decode, you can see the user’s credentials – including the email field. (For TryCF the user_name and email are both admin.) If you paste in the above and decode it, you will see:

{"jti":"7726d161-a5e3-40f3-a13c-499f03c90adb","sub":"6cb9104c-ae31-41a3-8d81-cb51f84199e3","scope":["cloud_controller.admin","cloud_controller.read","cloud_controller.write","openid","password.write","scim.read","scim.write"],"client_id":"cf","cid":"cf","grant_type":"password","user_id":"6cb9104c-ae31-41a3-8d81-cb51f84199e3","user_name":"admin","email":"admin","iat":1417811577,"exp":1417812177,"iss":"https://uaa.54.174.229.119.xip.io/oauth/token","aud":["scim","openid","cloud_controller","password"]}

Feel free to try it with your own Access Token! :)

[From my work blog here.]

PS – It’s not a Wednesday, but most of my work-related blog posts are going to be on Wednesdays, so I’m keeping them all together…

How To Install Go on Digital Ocean with a CentOS7 Droplet — 3-December-2014

How To Install Go on Digital Ocean with a CentOS7 Droplet

I will mostly be following the instructions from here, but instead of Ubuntu I am going to try CentOS.

Getting Started

All you will need to start, aside from the obvious internet connection, is a Digital Ocean account. I’ve found that the $5/mo. plan is really good for learning.

Spin up your droplet

This is actually pretty straightforward. Your regional preferences may differ, but here’s what my settings looked like:

You’ll immediately see a progress bar to show you how quickly the droplet is being started:

Actually, it took less than 60 seconds total to completely spin that up. This is my first experience with Digital Ocean, so I’m pretty impressed with that.

Once the droplet is up and running, you will receive an email similar to this one that includes your login credentials:


Yes, that is Google Inbox. It’s shiny, right?

Use these to SSH into your new droplet:

$ ssh digital-ocean
The authenticity of host '[address] ([address])' can't be established.
RSA key fingerprint is [key].
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[address]' (RSA) to the list of known hosts.
root@[address]'s password:
You are required to change your password immediately (root enforced)
Changing password for root.
(current) UNIX password:
New password:
Retype new password:
[root@centos-dev ~]#

(Note that I updated my ~/.ssh/config file to include the hostname and user name for this droplet and gave it the alias digital-ocean. If you do not do this, you can simply SSH into the address provided in the credentials email.)

Create a new user

According to the Ubuntu instructions provided, the next step is to create a new user account. Pretty straightforward, but since the commands are a little different for CentOS7, I am using Digital Ocean’s Initial Server Setup with CentOS 7 instructions. Here I am creating a new user, providing that user with superuser privileges, and changing accounts into the new user account:

[root@centos-dev ~]# adduser quinn
[root@centos-dev ~]# passwd quinn
Changing password for user quinn.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@centos-dev ~]# gpasswd -a quinn wheel
Adding user quinn to group wheel
[root@centos-dev ~]# su quinn
[quinn@centos-dev root]$

To create your own user, simply replace quinn with your desired username.

Installing Go

The Ubuntu instructions use apt-get for this, but CentOS does not have apt-get. We’re in luck, though! We can just run sudo yum install golang (or run the command as root) to install the latest packaged release of Go (yes, there is a lot of output):

[quinn@centos-dev root]$ sudo yum install golang

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for quinn:
Loaded plugins: fastestmirror
base                                                     | 3.6 kB     00:00
extras                                                   | 3.4 kB     00:00
updates                                                  | 3.4 kB     00:00
(1/4): extras/7/x86_64/primary_db                          |  35 kB   00:00
(2/4): base/7/x86_64/group_gz                              | 157 kB   00:00
(3/4): updates/7/x86_64/primary_db                         | 4.8 MB   00:01
(4/4): base/7/x86_64/primary_db                            | 4.9 MB   00:01
Determining fastest mirrors
 * base: mirror.ash.fastserv.com
 * extras: mirrors.advancedhosters.com
 * updates: mirrors.lga7.us.voxel.net
Resolving Dependencies
--> Running transaction check
---> Package golang.x86_64 0:1.3.3-1.el7.centos will be installed
--> Processing Dependency: golang-src for package: golang-1.3.3-1.el7.centos.x86_64
--> Processing Dependency: golang-bin for package: golang-1.3.3-1.el7.centos.x86_64
--> Running transaction check
---> Package golang-pkg-bin-linux-amd64.x86_64 0:1.3.3-1.el7.centos will be installed
--> Processing Dependency: golang-pkg-linux-amd64 = 1.3.3-1.el7.centos for package: golang-pkg-bin-linux-amd64-1.3.3-1.el7.centos.x86_64
--> Processing Dependency: golang-pkg-linux-amd64 = 1.3.3-1.el7.centos for package: golang-pkg-bin-linux-amd64-1.3.3-1.el7.centos.x86_64
--> Processing Dependency: gcc for package: golang-pkg-bin-linux-amd64-1.3.3-1.el7.centos.x86_64
---> Package golang-src.noarch 0:1.3.3-1.el7.centos will be installed
--> Running transaction check
---> Package gcc.x86_64 0:4.8.2-16.2.el7_0 will be installed
--> Processing Dependency: cpp = 4.8.2-16.2.el7_0 for package: gcc-4.8.2-16.2.el7_0.x86_64
--> Processing Dependency: glibc-devel >= 2.2.90-12 for package: gcc-4.8.2-16.2.el7_0.x86_64
--> Processing Dependency: libmpfr.so.4()(64bit) for package: gcc-4.8.2-16.2.el7_0.x86_64
--> Processing Dependency: libmpc.so.3()(64bit) for package: gcc-4.8.2-16.2.el7_0.x86_64
---> Package golang-pkg-linux-amd64.noarch 0:1.3.3-1.el7.centos will be installed
--> Running transaction check
---> Package cpp.x86_64 0:4.8.2-16.2.el7_0 will be installed
---> Package glibc-devel.x86_64 0:2.17-55.el7_0.1 will be installed
--> Processing Dependency: glibc-headers = 2.17-55.el7_0.1 for package: glibc-devel-2.17-55.el7_0.1.x86_64
--> Processing Dependency: glibc-headers for package: glibc-devel-2.17-55.el7_0.1.x86_64
---> Package libmpc.x86_64 0:1.0.1-3.el7 will be installed
---> Package mpfr.x86_64 0:3.1.1-4.el7 will be installed
--> Running transaction check
---> Package glibc-headers.x86_64 0:2.17-55.el7_0.1 will be installed
--> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers-2.17-55.el7_0.1.x86_64
--> Processing Dependency: kernel-headers for package: glibc-headers-2.17-55.el7_0.1.x86_64
--> Running transaction check
---> Package kernel-headers.x86_64 0:3.10.0-123.9.3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                       Arch      Version               Repository  Size
================================================================================
Installing:
 golang                        x86_64    1.3.3-1.el7.centos    extras     2.6 M
Installing for dependencies:
 cpp                           x86_64    4.8.2-16.2.el7_0      updates    5.9 M
 gcc                           x86_64    4.8.2-16.2.el7_0      updates     16 M
 glibc-devel                   x86_64    2.17-55.el7_0.1       updates    1.0 M
 glibc-headers                 x86_64    2.17-55.el7_0.1       updates    650 k
 golang-pkg-bin-linux-amd64    x86_64    1.3.3-1.el7.centos    extras      11 M
 golang-pkg-linux-amd64        noarch    1.3.3-1.el7.centos    extras     6.6 M
 golang-src                    noarch    1.3.3-1.el7.centos    extras     5.5 M
 kernel-headers                x86_64    3.10.0-123.9.3.el7    updates    1.4 M
 libmpc                        x86_64    1.0.1-3.el7           base        51 k
 mpfr                          x86_64    3.1.1-4.el7           base       203 k

Transaction Summary
================================================================================
Install  1 Package (+10 Dependent packages)

Total download size: 51 M
Installed size: 181 M
Is this ok [y/d/N]: y
Downloading packages:
(1/11): cpp-4.8.2-16.2.el7_0.x86_64.rpm                    | 5.9 MB   00:01
(2/11): gcc-4.8.2-16.2.el7_0.x86_64.rpm                    |  16 MB   00:02
(3/11): glibc-devel-2.17-55.el7_0.1.x86_64.rpm             | 1.0 MB   00:01
(4/11): glibc-headers-2.17-55.el7_0.1.x86_64.rpm           | 650 kB   00:00
(5/11): kernel-headers-3.10.0-123.9.3.el7.x86_64.rpm       | 1.4 MB   00:00
(6/11): golang-1.3.3-1.el7.centos.x86_64.rpm               | 2.6 MB   00:00
(7/11): libmpc-1.0.1-3.el7.x86_64.rpm                      |  51 kB   00:00
(8/11): golang-pkg-bin-linux-amd64-1.3.3-1.el7.centos.x86_ |  11 MB   00:01
(9/11): mpfr-3.1.1-4.el7.x86_64.rpm                        | 203 kB   00:00
(10/11): golang-src-1.3.3-1.el7.centos.noarch.rpm          | 5.5 MB   00:01
(11/11): golang-pkg-linux-amd64-1.3.3-1.el7.centos.noarch. | 6.6 MB   00:06
--------------------------------------------------------------------------------
Total                                              5.5 MB/s |  51 MB  00:09
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                     1/11
  Installing : libmpc-1.0.1-3.el7.x86_64                                   2/11
  Installing : cpp-4.8.2-16.2.el7_0.x86_64                                 3/11
  Installing : kernel-headers-3.10.0-123.9.3.el7.x86_64                    4/11
  Installing : glibc-headers-2.17-55.el7_0.1.x86_64                        5/11
  Installing : glibc-devel-2.17-55.el7_0.1.x86_64                          6/11
  Installing : gcc-4.8.2-16.2.el7_0.x86_64                                 7/11
  Installing : golang-src-1.3.3-1.el7.centos.noarch                        8/11
  Installing : golang-pkg-linux-amd64-1.3.3-1.el7.centos.noarch            9/11
  Installing : golang-1.3.3-1.el7.centos.x86_64                           10/11
  Installing : golang-pkg-bin-linux-amd64-1.3.3-1.el7.centos.x86_64       11/11
  Verifying  : cpp-4.8.2-16.2.el7_0.x86_64                                 1/11
  Verifying  : golang-pkg-bin-linux-amd64-1.3.3-1.el7.centos.x86_64        2/11
  Verifying  : golang-1.3.3-1.el7.centos.x86_64                            3/11
  Verifying  : gcc-4.8.2-16.2.el7_0.x86_64                                 4/11
  Verifying  : golang-src-1.3.3-1.el7.centos.noarch                        5/11
  Verifying  : kernel-headers-3.10.0-123.9.3.el7.x86_64                    6/11
  Verifying  : glibc-devel-2.17-55.el7_0.1.x86_64                          7/11
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                     8/11
  Verifying  : glibc-headers-2.17-55.el7_0.1.x86_64                        9/11
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                  10/11
  Verifying  : golang-pkg-linux-amd64-1.3.3-1.el7.centos.noarch           11/11

Installed:
  golang.x86_64 0:1.3.3-1.el7.centos

Dependency Installed:
  cpp.x86_64 0:4.8.2-16.2.el7_0
  gcc.x86_64 0:4.8.2-16.2.el7_0
  glibc-devel.x86_64 0:2.17-55.el7_0.1
  glibc-headers.x86_64 0:2.17-55.el7_0.1
  golang-pkg-bin-linux-amd64.x86_64 0:1.3.3-1.el7.centos
  golang-pkg-linux-amd64.noarch 0:1.3.3-1.el7.centos
  golang-src.noarch 0:1.3.3-1.el7.centos
  kernel-headers.x86_64 0:3.10.0-123.9.3.el7
  libmpc.x86_64 0:1.0.1-3.el7
  mpfr.x86_64 0:3.1.1-4.el7

Complete!

Now the moment of truth:

[quinn@centos-dev root]$ go version
go version go1.3.3 linux/amd64

Awesome.

Environment Variables

If you check your environment variables, you will see that references to Go are “mysteriously” missing:

[quinn@centos-dev root]$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
[quinn@centos-dev root]$ echo $GOPATH

[quinn@centos-dev root]$

In CentOS you have a ~/.bash_profile file that you can update to handle this. By default, it will look a little something like this:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH

Update the file to include $GOPATH and make sure to append the bin subdirectory of your $GOPATH to your $PATH variable:

export GOPATH=$HOME/go
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$GOPATH/bin

After you source your ~/.bash_profile file, you should now have the appropriate values for these variables:

[quinn@centos-dev root]$ source ~/.bash_profile
[quinn@centos-dev root]$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/quinn/.local/bin:/home/quinn/bin:/home/quinn/go/bin
[quinn@centos-dev root]$ echo $GOPATH
/home/quinn/go

Excellent.

Note: You can also see all of your Go environment information using go env:

[quinn@centos-dev bin]$ go env
GOARCH="amd64"
GOBIN=""
GOCHAR="6"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/quinn/go"
GORACE=""
GOROOT="/usr/lib/golang"
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"

Installing Git & Mercurial

In order to “get” revel (next step) we need both git and hg (Mercurial). To install, simply run:

[quinn@centos-dev ~]$ sudo yum install git
[quinn@centos-dev ~]$ sudo yum install hg

For brevity, I am not including the output. The install is actually very speedy (rock on Digital Ocean!), but there were a lot of dependencies in both cases.

Sidenote: How to determine if you need Git, Hg, or both
For me, it was a case of trial and error. When I attempted to “get” revel without both Git and Hg installed, I received an error naming the missing tool. For example, when I only had Git, I received the error go: missing Mercurial command. Likewise, when I only had Hg, I received the error go: missing Git command.

Installing EPEL

You will also need EPEL. To install, I followed these instructions.

[quinn@centos-dev ~]$ cd /tmp
[quinn@centos-dev tmp]$ sudo yum install wget
[quinn@centos-dev tmp]$ wget https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm

Again, for brevity, I have not included the install output.

Sidenote: yum install method
I also tried sudo yum install epel-release-7-2.noarch.rpm, but received the following message: No package epel-release-7-2.noarch.rpm available.

Installing Revel

To install Revel we need to use go get. Good thing we installed git and hg, eh?

[quinn@centos-dev tmp]$ go get github.com/revel/cmd/revel

(If all goes smoothly, you should have no output.)

Now for the moment of truth:

[quinn@centos-dev tmp]$ revel run github.com/revel/revel/samples/chat
~
~ revel! http://revel.github.io
~
2014/12/03 21:10:19 revel.go:326: Loaded module static
2014/12/03 21:10:19 revel.go:326: Loaded module testrunner
2014/12/03 21:10:19 run.go:57: Running chat (github.com/revel/revel/samples/chat) in dev mode
2014/12/03 21:10:19 harness.go:165: Listening on :9000

SUCCESS.

Addenda

  • go get uses the version control system that the package you are “getting” is hosted with.
  • As a new CentOS user I am still not 100% clear on what EPEL provides for the revel install – but without it you will be plagued with errors.

This is a copy of my work blog post.
