All posts by jeff

AWS for “Bootstrapping Microservices with Docker, Kubernetes, and Terraform”

The book Bootstrapping Microservices with Docker, Kubernetes, and Terraform by Ashley Davis does a good job of taking you through each building block incrementally, with a great amount of detail. You can buy it on Amazon or access it through ACM via Skillsoft / Percipio. However, it is focused on Azure services. I was willing to go along with that in the interest of getting through the core learning faster, but I found myself trying to solve Azure issues; at that point following the Azure steps wasn’t helping, it was getting in the way. Since eventually I would need to figure out how to do all this on Amazon Web Services (AWS), I thought I would document the steps I took so that others who are reading that book can follow the author but work on AWS.

So here we go!

Chapter 3

Chapter 3 is the first place the book has you use Azure. I didn’t actually have problems here, in that I did create the container registry on Azure and used it, but in the interest of making an all-AWS path for the book, we’ll discuss how to create the container registry on AWS.

You’ll need four things.

  • An AWS account
  • ‘aws’ command line on your workstation or VM
  • An Identity and Access Management (IAM) user
  • An Elastic Container Registry (ECR) repository

Amazon loves acronyms; their service names are ridiculously unhelpful, though these are less whimsical than many. The need for an account is probably obvious; AWS gives away free services just like Microsoft, so you should be able to do this at no out-of-pocket cost. Even if you already have an AWS account, these are very low-cost services at the scale we will be using for this exercise.

Create an Account

Head to https://aws.amazon.com with your favorite browser. The upper right button will either say “Create an AWS Account” or “Sign in to the Console”, the latter if cookies are telling AWS that you already have an account. If you need a new account, click the “Sign in to the Console” button; the resulting page has a create button. Here you will create a “Root User” account to get started. Use one of your existing email addresses and AWS will send you a verification code to enter. After verifying, you set your root password.

The root user is the super user of your account, and if it is compromised you could be out a great deal of money, so choose your password wisely. You will have to provide a credit card as well as plenty of other information, and answer two or three captchas. You can make the captchas go away by adding a new identity with IAM for your administrative work; I recommend you do that on your own after following the next section, where we create an identity for the rest of the work here. The root login is intentionally difficult to use so that you only use it when you really need to. At a bare minimum, you should add multi-factor authentication (MFA) to the root account using a mobile phone app or text message. But that only adds security; you’ll still need to answer the captcha each time you log in as root.

Create an IAM user

If you aren’t already positioned at the IAM dashboard, click the “Services” icon to the right of the AWS logo at the top, and select IAM if it’s visible. If it’s not visible, click “All Services”, scroll through the alphabetically ordered list to IAM, and click it.

Click “Users” on the left, then the “Add users” button at the upper right. For this exercise we’re going to create a new user named “microservices”. The credential type in this case is “Access key” for programmatic access. Hit the “Next” button; it will have selected “Add user to group” and will let you create a group by clicking a button. Click it now.

For this exercise we will name the new group “container-admin”; it will have just the permissions required to administer containers. As we need more functionality we’ll create more groups and add those groups to our user. In the filter box enter “container”, check the box next to “AmazonEC2ContainerRegistryFullAccess”, then click “Create Group”, then “Next: Tags”, then “Next: Review”, and finally “Create User”.

Now you have to note the credentials for that user. You won’t get another chance, so I recommend downloading the CSV, naming it something memorable like “microservices_user_aws_credentials.csv”, and saving it in a secure location.

To create administrative users the process is the same; choose whichever services you really need for most of your work. The most critical one is administrative access to IAM, but for security purposes it might make sense to leave that to the root user.

Get the aws command line interface (CLI)

The Vagrant file didn’t work for me; had it worked, I might not have bothered with this AWS exercise right now. The rest of this document assumes you will install natively on your workstation.

This varies by system, so I recommend you visit https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html and follow the instructions for your system.

Create the Registry

You configure command line access using ‘aws configure’. If you don’t already have a profile, you can create a new one by naming it on the command line, such as:

aws configure --profile microservices

By naming different profiles you can quickly switch between IAM users and even AWS accounts. As part of this configuration step, you can choose the region where your AWS services will run. For this exercise choose one near you from the list available in the drop-down in your browser, at the upper right just to the left of your login name; for example us-west-2, which is in Oregon. Setting a default region will save you some typing.
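For reference, aws configure prompts for the access key pair from the CSV you downloaded, plus a default region and output format. The values below are placeholders; a typical session looks roughly like this:

aws configure --profile microservices
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-west-2
Default output format [None]: json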

To choose which profile the AWS cli will use, set the environment variable AWS_PROFILE as in:

export AWS_PROFILE=microservices

In Ashley Davis’s book, we are doing the same thing he discusses in 3.9.1, called a “Private Container Registry”. The full command to do the equivalent in AWS at the command line is:

aws ecr create-repository --repository-name bmdk1 --profile microservices --region us-west-2

That produces output that includes the URL you’ll need for the docker push command in Ashley Davis’s book. It looks like the following:

{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-west-2:#############:repository/bmdk1",
        "registryId": "##############",
        "repositoryName": "bmdk1",
        "repositoryUri": "##########.dkr.ecr.us-west-2.amazonaws.com/bmdk1",
        "createdAt": "2022-03-18T11:45:59-07:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}

What you need to note is the repositoryUri value. In the example output I replaced the digits of my account (registry) ID with # characters.

The name bmdk1 is from Ashley’s book; it’s arbitrary but normally you would choose something intuitively meaningful.

Now we’re going to have docker authenticate to our new registry. The easiest way to get these commands exactly right is to look at the registry in the logged-in browser and click the button labelled “View push commands”; these commands are easy to mess up and the resulting errors are not helpful. If you created a named profile, you must add --profile to the aws ecr get-login-password portion of the docker login pipeline. Here’s a working example with the registry ID blocked out, the region specified in case there is no default, and the microservices profile created earlier with aws configure:

aws ecr get-login-password --region us-west-2 --profile microservices | docker login --username AWS --password-stdin ############.dkr.ecr.us-west-2.amazonaws.com

Note that the last parameter is the repositoryUri from the create command, but with the repository name removed from the end.

Next you need to build the docker image, unless you’ve already done that:

docker build . -t bmdk1

Then we tag the image so docker push knows what we mean:

docker tag bmdk1:latest ############.dkr.ecr.us-west-2.amazonaws.com/bmdk1:latest

Finally docker can push to our new registry.

docker push #############.dkr.ecr.us-west-2.amazonaws.com/bmdk1:latest

So now you are caught up to the book, except that at the end of 3.9.2 Ashley checks the pushed image in the GUI. You can do the same on AWS; the repository and image views look like the following.

[Screenshots: ECR repository list and image details]
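If you prefer to stay at the command line, you can also confirm the pushed image with the CLI; a quick sketch using the same profile and region:

aws ecr describe-images --repository-name bmdk1 --profile microservices --region us-west-2

The output should list the image digest and the latest tag you just pushed.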

Chapters 4 and 5 don’t need AWS, so we’ll pick up at Chapter 6.

Chapter 6

Section 6.6 is mostly about working with the Azure CLI. We just went over the AWS CLI in the previous section, so you can skip 6.6 in the book. We’ve already worked with the ecr subcommand. Azure’s subcommand for Kubernetes is ‘aks’; AWS calls its subcommand ‘eks’, for “Elastic Kubernetes Service” (AWS really likes the term ‘elastic’). AWS does not provide EKS version information at the CLI; it is covered in their online documentation. There is really nothing to do with EKS at the command line prior to creating a cluster, which we will do shortly using Terraform.

What we will do here to prepare is add EKS permissions to our IAM user. Log into AWS as the root user (or an admin, if you set one up) and go to IAM. We’re going to create another group for admin access to EKS. Kubernetes is called K8S for short in some circles, so let’s create a new group called k8s-admin. Click “User groups” on the left, then “Create Group”, then check the box next to your microservices user to add it to this new group. I tried adding individual EKS permissions, but there was no obvious one for “all permissions”, so instead select “Create inline policy” and enter this in the JSON tab:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": "*"
        }
    ]
}

The user also needs admin access to EC2, so repeat that create-group exercise: add a group called ec2-admin, add your user to that group, and give it full EC2 access.

The user also needs admin access to IAM. Repeat again with iam-full-access.
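If you would rather script this than click through the console, here is a hedged sketch using AWS managed policies; run it with your admin (or root) credentials rather than the microservices profile:

aws iam create-group --group-name ec2-admin
aws iam attach-group-policy --group-name ec2-admin --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam add-user-to-group --group-name ec2-admin --user-name microservices
aws iam create-group --group-name iam-full-access
aws iam attach-group-policy --group-name iam-full-access --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam add-user-to-group --group-name iam-full-access --user-name microservices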

Failing to do this will result in permission problems during terraform apply, but it may not be obvious what the issue is.
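A quick way to confirm the user ended up in all of the expected groups before running Terraform (run with an identity that has IAM read access):

aws iam list-groups-for-user --user-name microservices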

Terraform

We’re going to take a slight left turn and create a full-on K8S cluster with Terraform in AWS, just to verify we have all the permissions correct. Then we’ll come back to Ashley’s book.

Make sure you’ve installed Kubernetes so you have access to kubectl. Next we’ll grab some starting configuration from HashiCorp. In the book, section 6.9.1, the author runs terraform init. We will too, but let’s configure properly first. Run the following command in a directory separate from what you cloned for the book:

git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster

At this point you should follow the tutorial at https://learn.hashicorp.com/tutorials/terraform/eks using the clone you just pulled. The clone needs only one change: add your profile name in the vpc.tf file. Optionally change the region here from Terraform’s default as well, so the top might look like this:

variable "region" {
  default     = "us-west-2"
  description = "AWS region"
}

variable "profile" {
  default = "microservices"
  description = "AWS profile for use by Terraform "
}

provider "aws" {
  region = var.region
  profile = var.profile
}

You’ll need to add the --profile option to the end of the update-kubeconfig command in the tutorial:

aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name) --profile microservices
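Once update-kubeconfig succeeds, a quick sanity check that kubectl is talking to the new cluster:

kubectl get nodes
kubectl cluster-info

You should see the worker nodes created by the tutorial and the EKS control plane endpoint.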

At the end of that side tutorial you will have a Kubernetes dashboard in your browser. For ease of reference, here is how you get the dashboard:

kubectl apply -f admin-user.yaml
kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl create token admin-user
kubectl proxy

These use the files shown at https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md (the first file is admin-user.yaml and the second is dashboard-adminuser.yaml).

Then point your browser at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

You’ll be directed to a login page where you can supply the token you generated above.

Section 6.9 Creating the registry with Terraform

This one is actually shockingly easy once you’ve done the previous section. Inspired by https://www.sysopsruntime.com/2021/07/10/create-elastic-container-registry-ecr-in-aws-with-terraform/, I created a new repository based on Ashley’s repo, but AWS-ized it.

Clone https://github.com/jpa57/bootstrapping-microservices-chapter-6.git and visit example-2 to use Terraform to create the flixtube container registry.
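The workflow inside example-2 is the usual Terraform pair of commands; roughly (directory name assumed from the repository layout):

cd bootstrapping-microservices-chapter-6/example-2
terraform init
terraform apply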

Section 6.10

Section 6.10 discusses variables, which we used to set our profile earlier. The best practice is to factor those variables into a small number of .tf files that the user will modify to suit their needs.

Section 6.11

This section is similar to what you did creating the cluster using files supplied by HashiCorp, and this chapter doesn’t help the AWS user. We have not replaced the example 3 code in the GitHub fork, since the HashiCorp tutorial worked fine for our current needs. For Chapter 7, we will integrate the book assets with the HashiCorp assets for a one-stop shop for deploying microservices to the cluster.

Section 6.12

In 6.12.1 the book talks about how Azure and Kubernetes interact to connect them. The equivalent section of the HashiCorp tutorial for AWS shows this one-line command that connects AWS and K8S, in this case using an assumed profile called microservices:

aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name) --profile microservices

The rest of 6.12 is applicable, but it is similar to the HashiCorp tutorial. The kubectl command to create the dashboard is here for ease of copying:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

I stopped at the kubectl proxy command on AWS.

Chapter 7

For this section, clone the forked repository into a new working directory:

git clone https://github.com/jpa57/bootstrapping-microservices-chapter-7.git

For example 1, we will deploy MongoDB to a container in our cluster and verify it works with a database browser. We’ll review the files after we deploy and test. Note that the forked repository has a directory called aws, parallel to the author’s scripts directory. This allows easy comparison with, and updates from, the base repository if needed.

cd bootstrapping-microservices-chapter-7/example-1/aws
terraform init
terraform apply
aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name) --profile microservices
kubectl get services

That final kubectl command will give you the IP address of the MongoDB instance to connect to using your favorite database browser. As of this writing, Robo 3T is now called Studio 3T.
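If the database service does not get an externally reachable address in your cluster, a hedged alternative is to port-forward it and point the database browser at localhost; the service name database matches the kubectl output shown later in this chapter:

kubectl port-forward service/database 27017:27017

Then connect your database browser to localhost:27017.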

Example 2

For example 2, it’s really the same, but the author has added rabbitmq.tf. After terraform apply, update-kubeconfig, and kubectl get services, note the name of the RabbitMQ management server and browse to it on port 15672; log in with guest/guest and you are on your way to example 3.

Example 3

A key to the port of example 3 is to get the secrets right that allow the images to be pulled from the repository. I found this reference helpful:

https://stackoverflow.com/questions/62328683/how-to-retrieve-a-secret-in-terraform-from-aws-secret-manager

To use that, I went into the AWS admin UI and created a secret of type “other” that includes the account ID, the user name, and the password. You basically construct JSON key/value pairs; we can then fetch the string and parse the JSON in the Terraform files. The ARN of the secret is asked for at the start of terraform apply, unless the variable is supplied on the command line. The front of the video-streaming.tf file deals with these secrets and pulls them into local variables:


data "aws_secretsmanager_secret" "secrets" {
  arn = var.secret_arn
}

data "aws_secretsmanager_secret_version" "current" {
  secret_id = data.aws_secretsmanager_secret.secrets.id
}

locals {
    service_name = "video-streaming"
    account_id = jsondecode(nonsensitive(data.aws_secretsmanager_secret_version.current.secret_string))["ACCOUNT_ID"]
    username =   jsondecode(nonsensitive(data.aws_secretsmanager_secret_version.current.secret_string))["USER_NAME"]
    password =   jsondecode(nonsensitive(data.aws_secretsmanager_secret_version.current.secret_string))["PASSWORD"]
    login_server = "${local.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    image_tag = "${local.login_server}/${var.app_name}:${var.app_version}"
}

In reality we should have chosen AWS terms for these, because what gets stored in Secrets Manager are the credentials for a programmatic identity in IAM. So USER_NAME will be loaded with an “Access Key ID” and PASSWORD will be loaded with the “Secret Access Key”. These credentials must have the ability to push to and pull from ECR.
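If you prefer the CLI to the console for creating the secret, here is a hedged sketch; the secret name matches my example ARN below, the key names match the Terraform locals above, and the values are placeholders you would replace with your own credentials (the caller needs Secrets Manager permissions):

aws secretsmanager create-secret \
    --name Microservices-training \
    --secret-string '{"ACCOUNT_ID":"123456789012","USER_NAME":"AKIAXXXXXXXXXXXXXXXX","PASSWORD":"your-secret-access-key"}' \
    --region us-west-2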

Even the account ID is hidden from source, though it is generally agreed that the account ID is not sensitive. As suggested by Ashley’s book, we supply variables on the command line, which will look like this with your own secret ARN:

terraform apply --var="app_version=1" --var="secret_arn=arn:aws:secretsmanager:us-west-2:1234567890:secret:Microservices-training-j4QbD9a" --auto-approve

To make this example work I also needed to add more permissions to the IAM user.

One of the things I learned the hard way is that AWS does not support a sessionAffinity of “ClientIP”. Removing it from the spec for the kubernetes_service, along with the selector, at least allocated an external IP address (though, as described below, removing the selector caused its own problem). After applying, update the kubeconfig and list the services:

$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name) --profile microservices

$ kubectl get services
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
database          ClusterIP      172.20.141.21    <none>                                                                   27017/TCP      120m
kubernetes        ClusterIP      172.20.0.1       <none>                                                                   443/TCP        125m
rabbit            ClusterIP      172.20.120.60    <none>                                                                   5672/TCP       120m
video-streaming   LoadBalancer   172.20.129.112   a0423b605b5aa4863a28c180298444d1-567482012.us-west-2.elb.amazonaws.com   80:32697/TCP   122m

From that output we should be able to go to http://a0423b605b5aa4863a28c180298444d1-567482012.us-west-2.elb.amazonaws.com/video and see the application running.
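Before reaching for a browser you can check the endpoint from the command line; the hostname below is the example from the output above, so substitute your own. It should print 200 once the pods and the load balancer are healthy:

curl -s -o /dev/null -w "%{http_code}\n" http://a0423b605b5aa4863a28c180298444d1-567482012.us-west-2.elb.amazonaws.com/video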

If you struggle like I did, it might be instructive to visit the log files in each service. In this example we’ve deployed the database and RabbitMQ, but they aren’t actually doing anything yet; the video application is not calling them. In my case I was getting a disconnect because I had removed the selector statement in the kubernetes_service declaration. All the log files ended up telling me was that the video-streaming application was never being hit; that led me to review my changes and find the missing statement.

To enable looking at log files in your pods, first apply this yaml file:

$ kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml

Then, find the names of your pods so you can ‘follow’ their log files.

$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
counter                           1/1     Running   0          20m
database-6d6ccf57fb-w8qf4         1/1     Running   0          126m
rabbit-f5cbc8fbf-gjw8s            1/1     Running   0          126m
video-streaming-97675cd9d-mknff   1/1     Running   0          126m

Now to tail the log file for the video service for example:

$ kubectl logs -f  pods/video-streaming-97675cd9d-mknff 


> video-streaming@1.0.0 start /usr/src/app
> node ./src/index.js

Microservice online.

Along the way I also added some debugging statements in video-streaming/src/index.js to verify port numbers; these then appeared in the log output, which was quite helpful.

At this point the services are all deployed and running, though only the video service is doing anything useful. It takes more than 12 minutes to deploy this cluster from start to finish, and about 3 minutes to destroy it.
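When you are finished experimenting, remember to tear everything down so you are not paying for an idle cluster; a sketch assuming the same variables you passed to apply:

terraform destroy --var="app_version=1" --var="secret_arn=arn:aws:secretsmanager:us-west-2:1234567890:secret:Microservices-training-j4QbD9a" --auto-approve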

Chapter 7.7 – pipelines

I started to work on porting this to AWS with GitHub, and perhaps Terraform Cloud for state storage… and then wondered where you run ‘docker’. But after reading this reference:

https://stackoverflow.com/questions/59410652/how-to-use-a-docker-file-with-terraform

In particular the answer by Martin Atkins convinced me that we should wait for Chapter 11, where Ashley says we will remove the ‘hackiness’ of doing everything in one step in Terraform. So I’ll leave this port for others who are interested in example-4, and am removing it from my GitHub repo.

Things I worked on for Chapter 7 example 4

These may be helpful for later work on Chapter 11. Stay tuned.

Since I’m more of a GitHub person, and the workflow for GitHub looks a bit easier than Bitbucket for AWS, I’m going to cover doing this with GitHub and Terraform Cloud in lieu of Bitbucket with S3 and DynamoDB, which I think is where you would end up otherwise, based on this reference.

The first thing you might want to do is learn a bit about Terraform Cloud. Start here, and sign up for a free account here.

I thought I’d run the little demo at https://github.com/hashicorp/tfc-getting-started. You need to be on Linux, I believe, to follow it; OS X barfed on the setup script. I used an Ubuntu 18.04 VM and installed terraform and jq, getting a warning message which I ignored:

[GFX1-]: glxtest: libEGL missing methods for GL test

I only followed it as far as the apply.

Then, back on app.terraform.io, I created an organization for this project, and a workspace. Creating the workspace made me choose from various VCS providers, including GitHub. That authenticated to GitHub, asked for permission to install Terraform there, and asked which repositories to include. I said all, but that choice applies to the organization, not to this workspace. A workspace is 1:1 with a single repository.

Because the previous work stored secrets in AWS Secrets Manager, and that is helpful for manual execution, I decided to keep that structure and just store the ARN of the secret collection in a Terraform Cloud variable set. To follow along, click Settings at the top, then Variable sets on the left, and create a new one. I called mine ‘Secrets’ and assume I might store other secrets in here, but for now it’s just the ARN of the secrets stored earlier. By selecting the ‘sensitive’ button, people can’t read that value from this interface, including yourself; it’s write-only.

Chapter 9

References

https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd

But of all the references, this was the most complete: https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2

This is a work in progress; more to come soon.

Leveraging C++ open source in rails apps

One thing that many software developers neglect is the build/buy decision. Often it is more cost effective to buy an existing solution rather than build your own. This can happen at the micro level as well: inside your custom-built solution, problems may be solved using libraries written by others. If that library happens to be open source, all the better!

But what if the best solution out there is written in some other language? I ran into this very scenario when I came across an ideal open source solution written in C++ that I wanted to use in my Rails application. The library implements a very complex data structure and supporting code, and trying to rebuild it from scratch would add zero value to my application. There is nothing like that library in the Rails ecosystem.

To the rescue: the Foreign Function Interface, FFI. This is, in itself, a very impressive piece of work. With just a few lines of code to bind the interfaces, you can literally call natively compiled code from Ruby.

You add a gem to your Gemfile:

gem 'ffi'

Then develop a bridge between the Rails and C++ spaces.

  1. Declare all external interfaces in a .h file for C++ to name/externalize them properly
  2. Use FFI’s DSL to declare the bridge specifics

The header file (simplified for presentation):

extern "C" {
  void addBlock(Index *idx, int64_t start_day, int64_t start_hour, int64_t end_day, int64_t end_hour, int64_t id);
  int64_t get_right(return_right_down* rrd);
  int64_t get_down(return_right_down* rrd);
  int64_t get_corner(return_right_down* rrd);
}

The DSL:

module SpecialLibrary
  extend FFI::Library

  ffi_lib 'c'
  ffi_lib_flags :now, :global
  ffi_lib './lib/special/libspecial_schedtool.o'

  attach_function :addBlock, [:pointer, :int, :int, :int, :int, :int], :void
  attach_function :get_right, [:pointer], :int
  attach_function :get_down, [:pointer], :int
  attach_function :get_corner, [:pointer], :int
end

The functions declared in the header file are implemented by you in a C++ file and compiled to a .o file, which is declared using the ffi_lib syntax in the DSL above. Those functions call the C++ API of the leveraged library (libspecial in my case), which is built according to the supplier’s instructions and installed on the system as a library.

If that’s not impressive enough, it works on both Ubuntu and OS X. The commands to compile the C++ interfaces are different, but the code itself needs no changes. I put this little script together to build on either system:

#! /bin/bash
#
# Different commands on OSX v Linux
case `uname` in
'Darwin')
    g++ -std=c++0x -I/usr/local/include/ -L/usr/local/lib libspecial_schedtool.cpp -lspecialindex_c -lspecialindex -dynamiclib -o libspecial_schedtool.o
    ;;
'Linux')
    g++ -std=c++0x -fPIC -I/usr/local/include/ -L/usr/local/lib libspecial_schedtool.cpp -lspecialindex_c -lspecialindex -shared -o libspecial_schedtool.o
    ;;
*)
    echo "Error: don't know how to compile on this platform"
esac
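A quick way to verify that the bridge object actually links against the underlying library is to inspect it after the build; ldd works on Linux, and otool -L is the OS X equivalent:

ldd libspecial_schedtool.o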


Honestly, my app would not exist today without this leverage.  

Bundler SSL failures using rails-assets.org

I recently discovered a cool way to use Bower assets in Rails applications from these guys. I want to give a great shout out to  who provide an example application you can clone that is a good learning example of how to use Restangular with Rails.

That was the good news. I have an OS X development machine running 10.10 that I did that work on, and it worked great. My laptop running 10.11, however, would fail to bundle the assets from rails-assets.org. It was complaining about SSL certificates, and I had no idea how to deal with that. My configuration is rvm 1.27.0, JRuby 9.0.5.0, Rails 3.2.22.2, JDK 1.8.

I found an easy test to run quickly to compare environments and test for the problem:

ruby -e 'require "net/http"; require "uri"; require "jruby-openssl"; puts "ruby: "+RUBY_VERSION; puts "openssl: "+OpenSSL::OPENSSL_VERSION;Net::HTTP.get(URI("https://rails-assets.org"))'

If that fails, but changing rails-assets to rubygems succeeds, then you probably have the same problem and can try my solution.  What I don’t quite understand is how that test fails on OSX 10.10, yet the bundle install succeeds.

When using JRuby, the certificates are stored in the JDK. You can use part of a recommended solution to find out where they are:

rvm osx-ssl-certs status all

It will return something like:

Certificates for /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre/lib/security/cacerts: Up to date.

That cacerts file is a keystore.  You need to add the root certificate used by rails-assets.org to that keystore. There are probably different ways to do it, but Firefox allows you to export the certificate easily:

  1. Browse to rails-assets.org
  2. Click the lock icon, then the right arrow, more information,  view certificate, details
  3. Click the root certificate (DST Root CA X3)
  4. Click export, save to a file ending in .pem, with format X509 certificate.  In my example I saved to ~/Desktop/DSTRootCAX3.pem

Now put it in your keystore:

cd /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre/lib/security/
sudo keytool -import -trustcacerts -alias root -file ~/Desktop/DSTRootCAX3.pem -keystore cacerts

(Note that the default password for the keystore is ‘changeit’)
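To confirm the certificate actually landed in the keystore, you can list it back out by its alias, again using the changeit password:

sudo keytool -list -v -keystore cacerts -alias root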

That import fixes the ruby one line test above for 10.10 and 10.11, for JDK 1.7 and 1.8.

Things that seemed good but did nothing:
Several solutions here.

  1. Try bypassing SSL by using http in your Gemfile (it must redirect to SSL, because this does not work).
  2. gem update --system. No apparent effect.
  3. Put ‘:ssl_verify_mode: 0’ in your ~/.gemrc. This gets past the first level of certificate error, but fails when installing specific Bower assets. Further, it isn’t a good solution because it disables SSL verification.
  4. Use rvm to install SSL before installing Ruby: this doesn’t work for JRuby, because the build is done with Maven and the option for the SSL directory is not supported.

This looked promising. Unfortunately, on my laptop, “rvm osx-ssl-certs update” ends up corrupting the JDK keystore! Instead of a keystore, it puts a text file containing certificates in its place, and subsequent failures start complaining about an invalid keystore. You’ll need to reinstall the JDK or recover the cacerts file from a backup. The other solutions on that page simply don’t work.

Persistence or Stubbornness

I’m attributing the motivation for this post to public radio, where I heard this recently: http://nonsenseatwork.com/915-when-stubbornness-makes-nonsense-of-persistence/. Persistence is often touted as a key attribute of a successful entrepreneur, and it is one of the attributes I have remarkable evidence of possessing. But maybe I’m just stubborn?

In an article for HBR, Muriel Maignan Wilkins wrote that you can manage your stubbornness by being open to new ideas and being able to admit you are wrong. The best entrepreneurs I have seen are willing to consider other choices, but balance that with a need to stay on course: changing plans to chase the “next best idea” is not leadership, yet neither is following a course assured to fail. Most adults have likely had an experience where they were convinced something wouldn’t work, and then it did. That’s likely because someone else believed, beyond the obvious reasons why it would fail, that it could work.

In my startup, RideGrid, we believed that people would ride in a car, one time, with a non-professional driver they don’t know. People said we were crazy. You could talk them through the rationale, but safety and trust were their first concerns. That was 2007. Uber has since launched Uber Pool, which has some characteristics similar to RideGrid. Very few people have that safety fear today:

  • My mother has used Uber.
  • People I work with in Tijuana told me this week that Uber drivers are more trustworthy than taxi drivers: if you left your laptop in a Tijuana taxi, it’s gone; if you left it in an Uber car, you might get it back.

Good entrepreneurs analyze evidence and motivation, and can pivot when they believe they are wrong, but they don’t change course on intuition before doing some analysis. Jumping at the intuition of others is entrepreneurial suicide. The analysis doesn’t have to be quantitative, but it must consider all the relevant variables and their interactions. The entrepreneur also has to filter out the distraction of analyzing too much.

Tolerance of ambiguity, self-awareness, and the fearless pursuit of what might end up being a waste of time but you don’t think so… that’s persistence, and there is nothing stubborn about that.

This post is also the subject of a Leadership Minute Video.