Chef – berkshelf lesson for dummies like me Ermahgerd!

I feel like some of the explanations of Berkshelf on the internet are confusing,
so I felt like doing a small write-up myself.

Berkshelf is pretty much a replacement for the “knife cookbook” command.
The big win with Berkshelf is that it also resolves a cookbook's dependencies, like apt or yum.
It reads a file called “Berksfile” that lists the other cookbooks the current cookbook needs (and what repositories to fetch them from) and pulls them down to your local system.
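A minimal Berksfile is just a little Ruby DSL, something like this (the cookbook names here are only illustrations):

# Pull in the dependencies listed in this cookbook's metadata.rb
metadata
# Fetch these from the community site
cookbook 'apt'
cookbook 'yum'
# Or pull a cookbook straight from a git repo
cookbook 'mything', git: 'git://github.com/example/mything.git'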

I will use the logstash cookbook at https://github.com/lusis/chef-logstash as an example.
If you read its Berksfile at https://github.com/lusis/chef-logstash/blob/master/Berksfile,
you will see the other cookbooks the logstash cookbook needs.

So, in order to get going:

gem install berkshelf
git clone git@github.com:lusis/chef-logstash.git
cd chef-logstash
berks install
berks upload

That installed Berkshelf, cloned the logstash cookbook, resolved the logstash cookbook's dependencies, and uploaded the logstash cookbook and its dependencies to your chef-server.

Additionally, Berkshelf creates its configuration file at ~/.berkshelf/config.json
You may need to edit some settings there to match your ~/.chef/knife.rb file.
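The parts you care about look roughly like this. This is only a sketch; I'm assuming the key names from the Berkshelf docs (chef_server_url, node_name, client_key), and they may vary between versions, so double-check yours:

{
  "chef": {
    "chef_server_url": "https://chef.mydomain.com",
    "node_name": "myuser",
    "client_key": "/home/myuser/.chef/myuser.pem"
  },
  "ssl": {
    "verify": false
  }
}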

Chef – Nagios Server quickstart

Clone the Opscode nagios cookbook

$ git clone git@github.com:opscode-cookbooks/nagios.git

Create a Berksfile so Berkshelf can resolve the cookbook's dependencies for you.
( I'm going to assume you have Berkshelf installed; if not,

gem install berkshelf

and read http://berkshelf.com/ )

$ cd nagios
$ cat>Berksfile<<EOF
metadata
cookbook 'bluepill'
cookbook 'perl'
cookbook 'rsyslog'
cookbook 'nginx'
cookbook 'nginx_simplecgi'

group :test do
#  cookbook 'minitest-handler', git: "git://github.com/btm/minitest-handler-cookbook.git"
end

EOF

Pull in the dependencies using Berkshelf and upload them to your chef-server

$ berks install
$ berks upload

Create your data bag for your nagios admin user

$ knife data bag create users
$ openssl passwd -1 -salt '78hJASHDGuywelhfsdkiukshdkfusdhgfu' 'nagiosadmin'
"$1$78hJASHD$KlWqNTM0UXf/iM6imQ.9F1"
$ cat>nagiosadmin.json<<'EOF'   # quote EOF so the shell doesn't expand the $1$... hash below
{
  "id": "nagiosadmin",
  "groups": "sysadmin",
  "htpasswd": "$1$78hJASHD$KlWqNTM0UXf/iM6imQ.9F1",
  "nagios": {
    "pager": "nagiosadmin_pager@example.com",
    "email": "nagiosadmin@example.com"
  }
}
EOF

Upload your nagiosadmin user to the users data bag on your chef-server

$ knife data bag from file users nagiosadmin.json

Create a chef role for “monitoring”

$ cat>monitoring.rb<<EOF
name "monitoring"
run_list %w[
  recipe[nagios::server]
]

default_attributes({
  :nagios => {
    :server => {
      ### START Install Version and Method
      :install_method => "package",
      ### END Install Version and Method
      :service_name => "nagios3",
      :home => "/usr/lib/nagios3",
      :conf_dir => "/etc/nagios3",
      :config_dir => "/etc/nagios3/conf.d",
      :cache_dir => "/var/cache/nagios3",
      :state_dir => "/var/lib/nagios3",
      :run_dir => "/var/run/nagios3",
      :docroot => "/usr/share/nagios3/htdocs",
      :server_name => "nagios",
      :web_server => "apache"
    },
    :client => {
      :install_method => "package"
    },
    :server_auth_method => "htauth",
    :url => "nagios.mydomain.com"
  }
})
EOF

Upload the “monitoring” role to the chef-server, then apply the role to your node and run chef-client

$ knife role from file monitoring.rb

$ knife node run_list add nagios.mydomain.com -r "role[monitoring]"
$ knife ssh -a ipaddress name:nagios.mydomain.com "chef-client"

Edit your local system's hosts file to point the domain at your server's IP if you don't have DNS

10.0.1.1   nagios.mydomain.com

Log in at
http://nagios.mydomain.com/nagios3
username / password = nagiosadmin / nagiosadmin (the password you hashed above)

Add the NRPE configurations on your clients

Create the application cookbook for your custom NRPE service checks

$ knife cookbook create mydomain_nrpe
$ cd mydomain_nrpe/recipes
$ cat>default.rb<<EOF
#
# Cookbook Name:: mydomain_nrpe
# Recipe:: default
#
# Copyright 2013, Example Company, Inc.
#
# This recipe defines the necessary NRPE commands for base system monitoring
# in Example Company Inc's Chef environment.

include_recipe 'nagios::client'

# Check for high load.  This check defines warning levels and attributes
nagios_nrpecheck "check_load" do
  command "#{node['nagios']['plugin_dir']}/check_load"
  warning_condition "6"
  critical_condition "10"
  action :add
end

# Check all non-NFS/tmp-fs disks.
nagios_nrpecheck "check_all_disks" do
  command "#{node['nagios']['plugin_dir']}/check_disk"
  warning_condition "8%"
  critical_condition "5%"
  parameters "-A -x /dev/shm -X nfs -i /boot"
  action :add
end

# Check for excessive users.  This command relies on the service definition to
# define what the warning/critical levels and attributes are
nagios_nrpecheck "check_users" do
  command "#{node['nagios']['plugin_dir']}/check_users"
  action :add
end
EOF

Upload the cookbook

$ knife cookbook upload mydomain_nrpe

Add the recipe to the run list of a node you want the NRPE checks installed on, or just assign it to a role

$ knife node run_list add james.mydomain "recipe[mydomain_nrpe]"
$ knife ssh -a ipaddress -x root name:james.mydomain "chef-client"

Add services to your Nagios server using data bag items in the “nagios_services” data bag

$ knife data bag create nagios_services
$ mkdir nagios_services
$ cd nagios_services
$ cat>ssh.json<<'EOF'   # quote EOF so $USER1$ and $HOSTADDRESS$ aren't expanded by the shell
{
  "id": "ssh",
  "hostgroup_name": "linux",
  "command_line": "$USER1$/check_ssh $HOSTADDRESS$"
}
EOF
$ cat>pingme.json<<EOF
{
  "id": "pingme",
  "hostgroup_name": "linux",
  "use_existing_command": "check-host-alive"
}
EOF
$ wget https://raw.github.com/opscode-cookbooks/nagios/master/examples/nagios_services/users.json
$ wget https://raw.github.com/opscode-cookbooks/nagios/master/examples/nagios_services/load.json
$ wget https://raw.github.com/opscode-cookbooks/nagios/master/examples/nagios_services/all_disks.json

Load all the Nagios service JSON files into the chef-server and run chef-client on the Nagios server

$ ls |while read i ; do knife data bag from file nagios_services $i ; done
$ knife ssh -a ipaddress -x root name:nagios.mydomain.com "chef-client"

Monitor a host that's not managed by chef

$ knife data bag create nagios_unmanagedhosts
$ cat >host.json<<EOF
{
  "address": "myhost.mydomain.com",
  "hostgroups": ["linux"],
  "id": "myhost",
  "notifications": 0
}
EOF
$ knife data bag from file nagios_unmanagedhosts host.json
$ knife ssh -x root -a ipaddress name:nagios.mydomain.com "chef-client"

Ruby with the F5 BigIP API

Something I found kind of useful.
Original instructions on F5's page:
https://devcentral.f5.com/tech-tips/articles/getting-started-with-ruby-and-icontrol

I assume you're running Ubuntu or Debian.

1. Install Ruby and RubyGems

apt-get install ruby rubygems libopenssl-ruby

2. Download the iControl Ruby gem from F5 DevCentral

3. Install iControl gem

gem install f5-icontrol-10.2.0.gem

Run one of the example files (located in /var/lib/gems/1.8/gems/f5-icontrol-10.2.0.a.1/examples/ if installed as ‘root’)

ruby get_version.rb <f5address> <username> <pass>
=> BIG-IP_v10.1.0

And lastly, here's a little script to grab the down/disabled/active pool members of a given pool.
It was constructed with the help of some of the example scripts that ship with the iControl gem.
Example usage:

f5-pool-members.rb -b bigIPAddress -u user -p pass -n poolname
    -b bigip-address,                BIGIP Load balancer address
        --bigip-address
    -u, --bigip-user bigip-user      Username of BIGIP admin
    -p, --bigip-pass bigip-pass      Password of BIGIP admin
    -n, --pool-name pool-name        Name of pool
    -h, --help                       Display this screen
#!/usr/bin/env ruby
# == Synopsis
# f5-pool-members.rb - List the down/disabled/active members of a pool
# == Usage
# f5-pool-members.rb [OPTIONS]
# -h, --help:
#    show help
#
# --bigip-address, -b [hostname]:
#    BIG-IP load balancer to query
#
# --bigip-user, -u [username]:
#    username for the BIG-IP
#
# --bigip-pass, -p [password]:
#    password for the BIG-IP
#
# --pool-name, -n [name]:
#    name of the pool whose members will be listed

require 'rubygems'
require 'f5-icontrol'
require 'optparse'

bigip_address = ''
bigip_user = ''
bigip_pass = ''
pool_name = ''

# Current script's name
currentFile = File.basename(__FILE__)

# If no options were given, default to showing help
if ARGV.empty?
  ARGV[0] = '-h'
end

optparse = OptionParser.new do |opts|
  # Set a banner, displayed at the top of the help screen
  opts.banner = "#{currentFile} -b bigIPAddress -u user -p pass -n poolname"

  # Define the options, and what they do
  opts.on( '-b', '--bigip-address bigip-address', 'BIGIP Load balancer address' ) do |x|
    bigip_address = x
  end
  opts.on( '-u', '--bigip-user bigip-user', 'Username of BIGIP admin' ) do |x|
    bigip_user = x
  end
  opts.on( '-p', '--bigip-pass bigip-pass', 'Password of BIGIP admin' ) do |x|
    bigip_pass = x
  end
  opts.on( '-n', '--pool-name pool-name', 'Name of pool' ) do |x|
    pool_name = x
  end
  #opts.on( '-d', '--node-definition node-definition', 'definition for node being added to pool, example: 10.2.1.1:443' ) do |x|
  #  node_definition = x
  #end

  # This displays the help screen
  opts.on( '-h', '--help', 'Display this screen' ) do
    puts opts
    exit 1
  end
end


# Parse Command options
optparse.parse!

# Initiate SOAP RPC connection to BIG-IP
bigip = F5::IControl.new(bigip_address, bigip_user, bigip_pass, ['LocalLB.Pool']).get_interfaces

# Ensure that the target pool exists
unless bigip['LocalLB.Pool'].get_list.include? pool_name
  puts 'ERROR: target pool "' + pool_name +'" does not exist'
  exit 1
end

active_members = []
disabled_members = []
down_members = []

bigip['LocalLB.Pool'].get_monitor_instance([ pool_name ])[0].each do |pool_member1|
  node_addr = pool_member1['instance']['instance_definition']['ipport']['address'].to_s
  node_port = pool_member1['instance']['instance_definition']['ipport']['port'].to_s

  # Sort each member into a bucket based on its health/enabled state
  if pool_member1['instance_state'].to_s =~ /INSTANCE_STATE_DOWN/
    down_members.push node_addr
  elsif pool_member1['enabled_state'].to_s =~ /false/
    disabled_members.push node_addr
  else
    active_members.push node_addr
  end
  #puts "Node: #{node_addr}:#{node_port}"
  #puts "Node Health: #{pool_member1['instance_state']}"
  #puts "Enabled State: #{pool_member1['enabled_state']}"
end

puts "Poolname: " + pool_name
puts "=============== Unhealthy State Nodes ================"
down_members.each do |x|
  puts x
end
puts "=============== Disabled State Nodes ================"
disabled_members.each do |x|
  puts x
end
puts "=============== Active and Healthy State Nodes ================"
active_members.each do |x|
  puts x
end

Example Output:

Poolname: myf5_pool
=============== Unhealthy State Nodes ================
10.0.0.3
=============== Disabled State Nodes ================
10.0.0.4
=============== Active and Healthy State Nodes ================
10.0.0.2
10.0.0.1

cubism.js with graphite server

Recently I'd been looking for a better interface for displaying multiple server stats.
Lo and behold, I found one created by Square.
Here's what this thing looks like:

[image: cubism_example]

Their project and examples can be found here:
http://square.github.com/cubism/

I’m going to assume you have an apache server and graphite server running somewhere.

So here's a small example of how to get this going and your metrics plotting.
It's pretty much an HTML file with some JavaScript.
Create a new HTML file in your Apache docroot (usually /var/www/ on most Ubuntu systems) with the following content.

1. Put in your page title and some CSS and JS includes

A lot of this pulls the CSS and JS source directly from Square's page.
I would recommend you download these files and serve them from your own Apache instead.
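Something like this would pull them all down (same URLs as the includes below):

$ wget http://d3js.org/d3.v2.js \
       http://square.github.com/cubism/cubism.v1.js \
       http://square.github.com/cubism/highlight.min.js \
       http://square.github.com/cubism/style.css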


<meta charset="utf-8" />
<title>Cubism.js</title>
<style>
@import url(//fonts.googleapis.com/css?family=Yanone+Kaffeesatz:400,700);
@import url(//square.github.com/cubism/style.css);
body { font-family: arial, helvetica, sans-serif; }
</style>
<div id="body">
<h2>Host01 Load Average</h2>
<div id="graphs"></div>
<script type="text/javascript" src="http://d3js.org/d3.v2.js"></script>
<script type="text/javascript" src="http://square.github.com/cubism/cubism.v1.js"></script>
<script type="text/javascript" src="http://square.github.com/cubism/highlight.min.js"></script>

2. Set up some cubism settings.

Set the time granularity with the “.step” setting.
Set how many steps of data to display with the “.size” setting.

<script type="text/javascript">
var context = cubism.context()
    .step( 1 * 60 * 1000 )   // 1 minute per step
    .size(960);              // 960 one-minute steps = 16 hours

Set up more cubism graphite settings.

Point “context.graphite” at the address of your graphite webserver.
Set the height of each row of metric data with “.height”.
Set the time shift with “.shift” to look back in time, e.g. -7 * 24 * 60 * 60 * 1000 shows data from 7 days ago (the 0 below means “now”).

var graphite = context.graphite("http://graphite-server.foo-bar.net");
var horizon = context.horizon().metric(graphite.metric).height(100).shift( - 0 * 24 * 60 * 60 * 1000 );

Create a list of metrics you want to see in an array

var metrics = [
   'stats.host01.cpu.load.load',
   'stats.host02.cpu.load.load',
   'nonNegativeDerivative(stats.host02.network.eth0.interface_tx_bytes)'
]

Call d3 to append the axis, the rule, and the horizon charts to the div with id="graphs"

d3.select("#graphs").append("div")
    .attr("class", "axis")
    .call(context.axis().orient("top"));

d3.select("#graphs").append("div")
    .attr("class", "rule")
    .call(context.rule());

d3.select("#graphs").selectAll(".horizon")
    .data(metrics)
  .enter().append("div")
    .attr("class", "horizon")
    .call(horizon);
</script>

Put it all together

The fully constructed HTML file should look something like this


<meta charset="utf-8" />
<title>Cubism.js</title>
<style>
@import url(//fonts.googleapis.com/css?family=Yanone+Kaffeesatz:400,700);
@import url(//square.github.com/cubism/style.css);
body { font-family: arial, helvetica, sans-serif; }
</style>
<div id="body">
<h2>Host01 Load Average</h2>
<div id="graphs"></div>
<script type="text/javascript" src="http://d3js.org/d3.v2.js"></script>
<script type="text/javascript" src="http://square.github.com/cubism/cubism.v1.js"></script>
<script type="text/javascript" src="http://square.github.com/cubism/highlight.min.js"></script>

<script type="text/javascript">
var context = cubism.context()
    .step( 1 * 60 * 1000 )   // 1 minute per step
    .size(960);              // 960 one-minute steps = 16 hours

var graphite = context.graphite("http://graphite-server.foo-bar.net");
var horizon = context.horizon().metric(graphite.metric).height(100).shift( - 0 * 24 * 60 * 60 * 1000 );

var metrics = [
   'stats.host01.cpu.load.load',
   'stats.host02.cpu.load.load',
   'nonNegativeDerivative(stats.host02.network.eth0.interface_tx_bytes)'
]

d3.select("#graphs").append("div")
    .attr("class", "axis")
    .call(context.axis().orient("top"));

d3.select("#graphs").append("div")
    .attr("class", "rule")
    .call(context.rule());

d3.select("#graphs").selectAll(".horizon")
    .data(metrics)
  .enter().append("div")
    .attr("class", "horizon")
    .call(horizon);
</script>

A more complicated example using the graphite.find function


<meta charset="utf-8" />
<title>Cubism.js</title>
<style>
@import url(//fonts.googleapis.com/css?family=Yanone+Kaffeesatz:400,700);
@import url(http://square.github.com/cubism/style.css);
body { font-family: arial, helvetica, sans-serif; }
</style>
<div id="body">
<h2>Host01 Load Average</h2>
<div id="graphs"></div>
<script type="text/javascript" src="http://d3js.org/d3.v2.js"></script>
<script type="text/javascript" src="http://square.github.com/cubism/cubism.v1.js"></script>
<script type="text/javascript" src="http://square.github.com/cubism/highlight.min.js"></script>

<script type="text/javascript">
var context = cubism.context()
    .step( 1 * 60 * 1000 )   // 1 minute per step
    .size(960);              // 960 one-minute steps = 16 hours

var graphite = context.graphite("http://graphite-server.foo-bar.net");
//////// Example: 'stats.host*.cpu.load.load'
var graphFind = 'stats.host0*.network.eth*.interface_*_bytes';

// Set The Time Row on Top
d3.select("#graphs").append("div")
    .attr("class", "axis")
    .call(context.axis().orient("top"));

// Set the Vertical Line Bar
d3.select("#graphs").append("div")
    .attr("class", "rule")
    .call(context.rule());

graphite.find(graphFind, function(error, results) {

   // Map find results to array and set to graphite.metric object type
    var metrics = results.sort().map(function(i) {
      return graphite.metric(i);
      //// return it as a nonNegativeDerivative
      // return graphite.metric('nonNegativeDerivative('+i+')');
    });

   // loop through array and print stuff to "graphs" div and apply .height and .colors to object
   for (var i=0;i<metrics.length;i++){
    d3.select("#graphs").call(function(div) {
        div.append("div").selectAll(".horizon")
             .data([metrics[i]])
             .enter().append("div")
             .attr("class", "horizon")
            .call(context.horizon()
              .height(100)
              .colors(["#08519c","#3182bd","#6baed6","#bdd7e7","#bae4b3","#74c476","#31a354","#006d2c"])
            );
    });
   }
   // Set The Time Row on Bottom
   d3.select("#graphs").append("div")
       .attr("class", "axis")
       .call(context.axis().orient("bottom"));
});
</script>

Amazon Glacier with Ruby FOG

Create an Amazon user with the following IAM security policy and save their credentials

{
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": ["*"],
      "Action": ["glacier:*"]
    }
  ]
}

Here's some shitty code you can try.
Note that “multipart_chunk_size” must be a power-of-two multiple of 1MB (1MB, 2MB, 4MB, …).

#!/usr/bin/env ruby

require 'rubygems'
require 'fog'

# Connect to Glacier with the IAM user's credentials
glacier = Fog::AWS::Glacier.new(
    :aws_access_key_id => 'MYACCESSKEY',
    :aws_secret_access_key => 'MYSUPERSECRETKEY')

# Create the vault and multipart-upload the archive in 1MB chunks
vault = glacier.vaults.create :id => 'myvault'
archive1 = vault.archives.create :body => File.new('MYFILE.tar.gz'), :multipart_chunk_size => 1024*1024, :description => "adding some archive BLAH"
puts archive1.inspect

The output should show you some info about your upload job.

You're going to want to store the archive id and the related object info for later, in case you want to fetch the archive back.

<Fog::AWS::Glacier::Archive
 id="dAisPrlq.......jTzjr64Xeg",
 description="adding some archive BLAH",
 body=#<File:MyFile.tar.gz>
 >

Retrieving archives is a two-step process (a rough sketch follows):
1. Create a job to pull the archive into a downloadable state ( archive-retrieval )
2. Pull down the bytes after the job is done and the archive is ready for download ( get archive-output )
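Here's a sketch of both steps, going off the fog usage described in the first reference below. Treat the exact model and method names (jobs.create, Job::ARCHIVE, wait_for, get_output) as assumptions and check them against your fog version:

# Look the vault back up
vault = glacier.vaults.get('myvault')

# Step 1: queue an archive-retrieval job; Glacier takes ~4 hours to stage it
job = vault.jobs.create(
    :type       => Fog::AWS::Glacier::Job::ARCHIVE,
    :archive_id => archive1.id)   # the archive id you saved from the upload

# Step 2: wait for the job to complete, then stream the bytes to disk
job.wait_for { completed }
File.open('MYFILE.tar.gz', 'wb') { |f| job.get_output(:io => f) }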
References:
http://www.spacevatican.org/2012/9/4/using-glacier-with-fog/
http://blog.vuksan.com/2010/07/20/provision-to-cloud-in-5-minutes-using-fog/

Ruby – Using RVM to create your ruby jail

* This has only been tested with Ubuntu 12.04 - you also need gcc and some version of ruby already installed.
These instructions let you run your own version of ruby and rubygems from your home folder.

Download and install rvm
Set a couple of environment variables

bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer) 

echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"' >> ~/.bash_profile 
echo 'PATH=$PATH:$HOME/.rvm/usr/bin # Add RVM to PATH for scripting' >> ~/.bash_profile
. ~/.bash_profile

Install Ruby 1.9.3

rvm install 1.9.3
rvm use 1.9.3 --default

Install some gnu tools you need to install gems

wget ftp://ftp.gnu.org/gnu/m4/m4-1.4.16.tar.gz 
tar xzvf m4-1.4.16.tar.gz && cd m4-1.4.16/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://ftp.gnu.org/gnu/gperf/gperf-3.0.4.tar.gz
tar xzvf gperf-3.0.4.tar.gz
cd gperf-3.0.4/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://invisible-island.net/byacc/byacc.tar.gz
tar xzvf byacc.tar.gz
cd byacc-20121003/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://ftp.gnu.org/gnu/termcap/termcap-1.3.1.tar.gz
tar xzvf termcap-1.3.1.tar.gz
cd termcap-1.3.1/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://ftp.gnu.org/gnu/ncurses/ncurses-5.9.tar.gz
tar xzvf ncurses-5.9.tar.gz
cd ncurses-5.9/
./configure --prefix=$HOME/.rvm/usr CFLAGS=-fPIC
make && make install

wget ftp://ftp.gnu.org/gnu/texinfo/texinfo-4.13a.tar.gz
tar xzvf texinfo-4.13a.tar.gz
cd texinfo-4.13/
./configure --prefix=$HOME/.rvm/usr LDFLAGS=-L$HOME/.rvm/usr/lib CPPFLAGS=-I$HOME/.rvm/usr/include/ncurses
make && make install

Install some more tools you need to install gems
This time just use the ones that rvm has packaged
# ORDER MATTERS !!!

for i in curl zlib readline openssl iconv pkgconfig autoconf libxml2 libxslt libyaml ; do rvm pkg install $i --verify-downloads 1 --with-opt-dir=$HOME/.rvm/usr ; done

Reinstall ruby 1.9.3 with the new path of your tools compiled in

rvm reinstall 1.9.3 --with-opt-dir=$HOME/.rvm/usr

Install the ‘fog’ gem

gem install fog

Your home folder will now be about 1.4GB, but you'll have a self-contained ruby and rubygems installation with the fog library available.

Amazon EC2 – Clone system into AMI without reboot

This requires the Amazon CLI API tools, located at: http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
It also requires Java to be installed.
API reference here: http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-CreateImage.html

This is for the lazy people like myself to reference. (ec2cim is the API tools' short alias for ec2-create-image.)
This will snapshot all EBS volumes attached to the system you're creating the image from.
If you don't pause the instance when you take a snapshot you can get an inconsistent state; but if you can't afford downtime, just do this, and if the new system doesn't boot properly, do it again.

james.tran$ ec2cim i-111a111 -n my_newami_name -d my_description --no-reboot --region us-west-1 -W AWSUSER_SECRET_KEY -O AWSUSER_ACCESS_KEY
IMAGE     ami-543106
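The new AMI sits in a “pending” state while the snapshots are taken. You should be able to poll it with ec2-describe-images until it shows “available” (I'm assuming the same credential flag conventions as above):

james.tran$ ec2-describe-images --region us-west-1 -O AWSUSER_ACCESS_KEY -W AWSUSER_SECRET_KEY ami-543106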

Amazon EC2 – Snapshotting EBS script

I've stolen the example from here, but I've made my own adjustments.

1. Create an Amazon IAM user for snapshotting and save the credentials file. (You'll need the keys for the Amazon CLI API; they come in CSV format.)
Create an Amazon IAM group with snapshot permissions: add a “Custom Policy” and paste in the code block below.
Example IAM policy:

{
  "Statement": [
    {
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DescribeSnapshotAttribute",
        "ec2:DescribeSnapshots",
        "ec2:ModifySnapshotAttribute",
        "ec2:ResetSnapshotAttribute"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ]
}

2. Install the Amazon CLI Tools

$ wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
$ unzip ec2-api-tools.zip
$ mv ec2-api-tools /opt
$ ln -s /opt/ec2-api-tools /opt/aws

3. Install the script below in cron at your snapshot interval; change the “Constants” and plug in your AWS keys. (A sample crontab entry follows the script.)
Script below:

#!/bin/bash
# EBS Snapshot volume script
# Constants - You'll want to edit these
JAVA_HOME="/usr"
EC2_HOME="/opt/aws"
ec2_bin="/opt/aws/bin"
export EC2_HOME
export JAVA_HOME
LOGFILE='/var/log/aws_snapshot.log'
TMPFILE='/tmp/snap_info.txt'

VOLTMPFILE='/tmp/volume_info.txt'

# Retention in days
RETENTION="7"

# AWS ACCESS INFO
access_key='SOMEACCESSKEY'
secret_key='SOMESECRETKEY'
instance_id=`wget -q -O- http://169.254.169.254/latest/meta-data/instance-id`

# Dates
datecheck_7d=`date +%Y-%m-%d --date "$RETENTION days ago"`
datecheck_s_7d=`date --date="$datecheck_7d" +%s`
datenow=`date +%Y-%m-%d-%H:%M:%S`

# Add entry in logfile for run begin
echo "${datenow} ======= BEGIN SNAPSHOT SCRIPT =========" 2>&1 >> $LOGFILE
# Get all volume info and copy to temp file
$ec2_bin/ec2-describe-volumes -O $access_key -W $secret_key  --filter "attachment.instance-id=$instance_id" > $VOLTMPFILE 2>&1

# Get all snapshot info
$ec2_bin/ec2-describe-snapshots -O $access_key -W $secret_key | grep "$instance_id" > $TMPFILE 2>&1

# Loop to remove any snapshots older than $RETENTION days
for snapshot_name in $(cat $TMPFILE | awk '{print $2}')
do
        datecheck_old=`cat $TMPFILE | grep "$snapshot_name" | awk '{print $5}' | awk -F "T" '{print $1}'`
        datecheck_s_old=`date --date="$datecheck_old" +%s`

        # Check if snapshot is older than retention days
        if (( $datecheck_s_old <= $datecheck_s_7d ));
        then
                echo "deleting snapshot $snapshot_name ... older than $RETENTION days" 2>&1 >> $LOGFILE
                $ec2_bin/ec2-delete-snapshot -O $access_key -W $secret_key $snapshot_name
        else
                echo "not deleting snapshot $snapshot_name ... not older than $RETENTION days" 2>&1 >> $LOGFILE
        fi
done

# Create snapshot
for volume in $(cat $VOLTMPFILE | grep "VOLUME" | awk '{print $2}')
do
        # Description cannot have spaces
        description="instance-id:${instance_id}_vol-id:${volume}_`hostname`_backup-`date +%Y-%m-%d`"
        echo "Creating Snapshot for the volume: $volume with description: $description" 2>&1 >> $LOGFILE
        $ec2_bin/ec2-create-snapshot -O $access_key -W $secret_key -d "$description" $volume 2>&1 >> $LOGFILE
done
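And a sample crontab entry to run it nightly at 3am (root's crontab via "crontab -e"; the path is hypothetical, use wherever you saved the script):

0 3 * * * /opt/aws/bin/ebs-snapshot.sh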

Ruby – regex example

I thought I might throw out some simple examples of using regexes with Ruby, for when I forget.

command = `uptime`   # the regex below wants uptime's "load average: x, y, z" output
regex = /(?<NAME0>load)\s+average:\s+(?<NAME1>\S+),\s+(?<NAME2>\S+),\s+(?<NAME3>\S+)/x
result = command.match(regex)

# Print your regex captures by name
puts " #{result['NAME0']} #{result['NAME1']} #{result['NAME2']} #{result['NAME3']}"
# or by position
puts " #{result[1]} #{result[2]} #{result[3]} #{result[4]}"

Annndd… something more complicated, in the context of something else:


#!/usr/bin/env ruby
require "getopt/long"
require 'socket'

opt = Getopt::Long.getopts(
     ["--server", "-s", Getopt::REQUIRED],
     ["--port", "-p", Getopt::REQUIRED],
     ["--environment", "-e", Getopt::REQUIRED]
)

unless opt["s"] and opt["p"] and opt["e"]
  unless opt["p"] =~ /\d+/
    currentFile = File.basename(__FILE__)
    puts "usage: ./#{currentFile} -s graphiteServer -p graphitePort -e siteEnvironment"
    puts "usage: ./#{currentFile} -s someserver -p 2003 -e dev"
    exit 1
  end
end

statprefix = 'stats'
hostname = `hostname`.chomp
command = `mpstat -P ALL`
epoch = (Time.now.to_i).to_s
graphiteServer = opt["s"]
graphitePort = opt["p"]
siteEnv = opt["e"]

regexTitles = /(?<TITLEID>CPU\s.*)/x
partsTitle = command.match(regexTitles)
partsTitle = partsTitle['TITLEID'].split

regex = /(?<CPUID>all.*)/x
parts = command.match(regex)
parts = parts['CPUID'].split

hash = Hash[partsTitle.zip(parts)]
sock = TCPSocket.new(graphiteServer, graphitePort)
hash.each_pair do |title,value|
  title = title.sub(/^\%/,"")
  sock.puts "#{statprefix}.#{siteEnv}.#{hostname}.cpu.all.#{title} #{value} #{epoch}"
end
sock.close