Chef – berkshelf lesson for dummies like me Ermahgerd!

I feel like some of the explanations of Berkshelf on the internet are confusing,
so I felt like doing a small write-up myself.

Berkshelf is pretty much a replacement for the “knife cookbook” command.
The big win with Berkshelf is that it also resolves a cookbook’s dependencies (other cookbooks such as apt or yum).
It reads a file called “Berksfile” that lists the other cookbooks the current cookbook needs and which repositories to fetch them from, then pulls them down to your local system.

I will use the logstash cookbook at https://github.com/lusis/chef-logstash as an example.
If you read its Berksfile at https://github.com/lusis/chef-logstash/blob/master/Berksfile,
you can see what other cookbooks the logstash cookbook needs.
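
A Berksfile generally looks something like this (a minimal sketch with made-up cookbook names and sources, not the actual logstash Berksfile):

# Berksfile (hypothetical example)
site :opscode                  # pull community cookbooks from the Opscode site

metadata                       # also resolve the dependencies listed in metadata.rb

cookbook "apt"
cookbook "yum"
cookbook "some_internal_thing", :git => "git@github.com:example/some_internal_thing.git"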

So, in order to get going:

gem install berkshelf
git clone git@github.com:lusis/chef-logstash.git
cd chef-logstash
berks install
berks upload

That installed Berkshelf, cloned the logstash cookbook, resolved the logstash cookbook’s dependencies, and uploaded the logstash cookbook and its dependencies to your Chef server.

Additionally, Berkshelf keeps its configuration file at ~/.berkshelf/config.json
You may need to edit some settings there to match your ~/.chef/knife.rb file

Ruby with the F5 BigIP API

Something I found kind of useful.
The original instructions are on F5’s page:
https://devcentral.f5.com/tech-tips/articles/getting-started-with-ruby-and-icontrol

I assume you’re running Ubuntu or Debian

1. Install Ruby Gems

apt-get install ruby rubygems libopenssl-ruby

2. Download iControl
iControl Ruby Gem

3. Install iControl gem

gem install f5-icontrol-10.2.0.gem

Run one of the example files (located in /var/lib/gems/1.8/gems/f5-icontrol-10.2.0.a.1/examples/ if installed as ‘root’)

ruby get_version.rb <f5address> <username> <pass>
=> BIG-IP_v10.1.0
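
If you just want to poke at the gem without digging through the examples directory, a rough equivalent of get_version.rb looks like this (a sketch based on the same F5::IControl interface pattern used in the script below, not the gem’s actual example code):

#!/usr/bin/env ruby
# Minimal sketch: connect to a BIG-IP and print its version
require 'rubygems'
require 'f5-icontrol'

address, user, pass = ARGV
bigip = F5::IControl.new(address, user, pass, ['System.SystemInfo']).get_interfaces
puts bigip['System.SystemInfo'].get_version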

And lastly, here’s a little script to grab the down/disabled/active pool members of a given pool.
It was put together with the help of some of the example scripts that ship with the iControl gem.
Example usage:

f5-pool-members.rb -b bigIPAddress -u user -p pass -n poolname
    -b, --bigip-address bigip-address  BIGIP Load balancer address
    -u, --bigip-user bigip-user      Username of BIGIP admin
    -p, --bigip-pass bigip-pass      Password of BIGIP admin
    -n, --pool-name pool-name        Name of pool
    -h, --help                       Display this screen
#!/usr/bin/env ruby
# == Synopsis
# f5-pool-members - list the down/disabled/active members of a given pool
# == Usage
# f5-pool-members.rb [OPTIONS]
# -h, --help:
#    show help
#
# --bigip-address, -b [hostname]:
#    BIG-IP to query
#
# --bigip-user, -u [username]:
#    username for the BIG-IP
#
# --bigip-pass, -p [password]:
#    password for the BIG-IP
#
# --pool-name, -n [name]:
#    name of the pool whose members will be listed

require 'rubygems'
require 'f5-icontrol'
require 'optparse'

bigip_address = ''
bigip_user = ''
bigip_pass = ''
pool_name = ''
node_address = ''
node_port = 80

# Current script's name
currentFile = File.basename(__FILE__)

# If no options are given, default to the help option
if ARGV.empty?
  ARGV[0] = '-h'
end

optparse = OptionParser.new do |opts|
  # Set a banner, displayed at the top of the help screen
  opts.banner = "#{currentFile} -b bigIPAddress -u user -p pass -n poolname"

  # Define the options, and what they do
  opts.on( '-b', '--bigip-address bigip-address', 'BIGIP Load balancer address' ) do |x|
    bigip_address = x
  end
  opts.on( '-u', '--bigip-user bigip-user', 'Username of BIGIP admin' ) do |x|
    bigip_user = x
  end
  opts.on( '-p', '--bigip-pass bigip-pass', 'Password of BIGIP admin' ) do |x|
    bigip_pass = x
  end
  opts.on( '-n', '--pool-name pool-name', 'Name of pool' ) do |x|
    pool_name = x
  end

  # This displays the help screen
  opts.on( '-h', '--help', 'Display this screen' ) do
    puts opts
    exit 1
  end
end


# Parse Command options
optparse.parse!

# Initiate SOAP RPC connection to BIG-IP
bigip = F5::IControl.new(bigip_address, bigip_user, bigip_pass, ['LocalLB.Pool']).get_interfaces

# Ensure that the target pool exists
unless bigip['LocalLB.Pool'].get_list.include? pool_name
  puts 'ERROR: target pool "' + pool_name +'" does not exist'
  exit 1
end

ActiveMembers = Array.new
DisabledMembers = Array.new
DownMembers = Array.new

bigip['LocalLB.Pool'].get_monitor_instance([ pool_name ])[0].collect do |pool_member1|
  puts
  node_addr = pool_member1['instance']['instance_definition']['ipport']['address'].to_s
  node_port = pool_member1['instance']['instance_definition']['ipport']['port'].to_s
 
  if pool_member1['instance_state'].to_s =~ /INSTANCE_STATE_DOWN/
    DownMembers.push node_addr
  elsif pool_member1['enabled_state'].to_s =~ /false/
    DisabledMembers.push node_addr
  else
    ActiveMembers.push node_addr
  end
  #puts "Node: #{node_addr}:#{node_port}"
  #puts "Node Health: #{pool_member1['instance_state']}"
  #puts "Enabled State: #{pool_member1['enabled_state']}"
end

puts "Poolname: " + pool_name
puts "=============== Unhealthy State Nodes ================"
DownMembers.each do |x|
  puts x
end
puts "=============== Disabled State Nodes ================"
DisabledMembers.each do |x|
  puts x
end
puts "=============== Active and Healthy State Nodes ================"
ActiveMembers.each do |x|
  puts x
end

Example Output:

Poolname: myf5_pool
=============== Unhealthy State Nodes ================
10.0.0.3
=============== Disabled State Nodes ================
10.0.0.4
=============== Active and Healthy State Nodes ================
10.0.0.2
10.0.0.1

Ruby – Using RVM to create your ruby jail

* This has only been tested with Ubuntu 12.04 – you also need gcc and some version of ruby already installed
These instructions let you run your own version of ruby and rubygems from your home folder

Download and install rvm
Set a couple of environment variables

bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer) 

echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"' >> ~/.bash_profile 
echo 'PATH=$PATH:$HOME/.rvm/usr/bin # Add RVM to PATH for scripting' >> ~/.bash_profile
. ~/.bash_profile

Install Ruby 1.9.3

rvm install 1.9.3
rvm use 1.9.3 --default

Install some gnu tools you need to install gems

wget ftp://ftp.gnu.org/gnu/m4/m4-1.4.16.tar.gz 
tar xzvf m4-1.4.16.tar.gz && cd m4-1.4.16/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://ftp.gnu.org/gnu/gperf/gperf-3.0.4.tar.gz
tar xzvf gperf-3.0.4.tar.gz
cd gperf-3.0.4/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://invisible-island.net/byacc/byacc.tar.gz
tar xzvf byacc.tar.gz
cd byacc-20121003/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://ftp.gnu.org/gnu/termcap/termcap-1.3.1.tar.gz
tar xzvf termcap-1.3.1.tar.gz
cd termcap-1.3.1/
./configure --prefix=$HOME/.rvm/usr
make && make install

wget ftp://ftp.gnu.org/gnu/ncurses/ncurses-5.9.tar.gz
tar xzvf ncurses-5.9.tar.gz
cd ncurses-5.9/
./configure --prefix=$HOME/.rvm/usr CFLAGS=-fPIC
make && make install

wget ftp://ftp.gnu.org/gnu/texinfo/texinfo-4.13a.tar.gz
tar xzvf texinfo-4.13a.tar.gz
cd texinfo-4.13/
./configure --prefix=$HOME/.rvm/usr LDFLAGS=-L$HOME/.rvm/usr/lib CPPFLAGS=-I$HOME/.rvm/usr/include/ncurses
make && make install

Install some more tools you need to install gems
This time just use the ones that rvm has packaged
# ORDER MATTERS !!!

for i in curl zlib readline openssl iconv pkgconfig autoconf libxml2 libxslt libyaml ; do rvm pkg install $i --verify-downloads 1 --with-opt-dir=$HOME/.rvm/usr ; done

Reinstall ruby 1.9.3 with the path to your newly compiled tools baked in

rvm reinstall 1.9.3 --with-opt-dir=$HOME/.rvm/usr

Install the ‘fog’ gem

gem install fog

Your home folder will now be about 1.4GB, but you’ll have a self-contained ruby and rubygems installation with the fog library available
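
A quick sanity check that the jailed ruby and the fog gem actually work (the commented-out AWS credentials are placeholders, not real values):

#!/usr/bin/env ruby
# Sanity check for the self-contained ruby + fog install
require 'rubygems'
require 'fog'

puts RUBY_VERSION    # should print 1.9.3
puts Fog::VERSION    # whatever version 'gem install fog' pulled down

# With real credentials you could go further, e.g.:
# compute = Fog::Compute.new(:provider => 'AWS',
#                            :aws_access_key_id => 'YOUR_KEY',
#                            :aws_secret_access_key => 'YOUR_SECRET')
# puts compute.servers.length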

Ruby – regex example

I thought I might throw out some simple examples of using regexes with Ruby, for when I forget.

command = `uptime`
regex = /(?<NAME0>load)\s+average:\s+(?<NAME1>\S+),\s+(?<NAME2>\S+),\s+(?<NAME3>\S+)/x
result = command.match(regex)

# Print the named captures
puts " #{result['NAME0']} #{result['NAME1']} #{result['NAME2']} #{result['NAME3']}"
# or reference the same groups by number
puts " #{result[1]} #{result[2]} #{result[3]} #{result[4]}"

Annndd… something a bit more complicated, in the context of an actual script (this one pushes CPU stats from mpstat to graphite):


#!/usr/bin/env ruby
require "getopt/long"
require 'socket'

opt = Getopt::Long.getopts(
     ["--server", "-s", Getopt::REQUIRED],
     ["--port", "-p", Getopt::REQUIRED],
     ["--environment", "-e", Getopt::REQUIRED]
)

unless opt["s"] and opt["p"] and opt["e"]
  unless opt["p"] =~ /\d+/
    currentFile = File.basename(__FILE__)
    puts "usage: ./#{currentFile} -s graphiteServer -p graphitePort -e siteEnvironment"
    puts "usage: ./#{currentFile} -s someserver -p 2003 -e dev"
    exit 1
  end
end

statprefix = 'stats'
hostname = `hostname`.chomp
command = `mpstat -P ALL`
epoch = (Time.now.to_i).to_s
graphiteServer = opt["s"]
graphitePort = opt["p"]
siteEnv = opt["e"]

regexTitles = /(?<TITLEID>CPU\s.*)/x
partsTitle = command.match(regexTitles)
partsTitle = partsTitle['TITLEID'].split

regex = /(?<CPUID>all.*)/x
parts = command.match(regex)
parts = parts['CPUID'].split

hash = Hash[partsTitle.zip(parts)]
sock = TCPSocket.new(graphiteServer, graphitePort)
hash.each_pair do |title,value|
  title = title.sub(/^\%/,"")
  sock.puts "#{statprefix}.#{siteEnv}.#{hostname}.cpu.all.#{title} #{value} #{epoch}"
end
sock.close

Multiple AWS Accounts with Knife Admin

I recently stumbled across a predicament of multiple AWS accounts.
This is a minor predicament, but a predicament nonetheless.
I have a situation where I have:

1. A personal AWS account

2. A work AWS account

3. A vendor AWS account

These three AWS accounts all use the same chef-server, so to make my life easier I decided to organize them.
I created the following structure:

$ mkdir -p ~/chef-aws/{personal,work,thirdparty}/.chef

I copied my knife.rb from ~/.chef/knife.rb into each of those .chef folders (knife looks for a .chef/knife.rb in the current directory and its parents).

$ cp -p ~/.chef/knife.rb ~/chef-aws/personal/.chef/
$ cp -p ~/.chef/knife.rb ~/chef-aws/work/.chef/
$ cp -p ~/.chef/knife.rb ~/chef-aws/thirdparty/.chef/

Here’s an example of the knife.rb file.
You can find details on setting up knife with EC2 here: Knife-EC2 Configuration


current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "neosirex"
client_key               "/home/James/.chef/myuser.pem"
validation_client_name   "neosirex-validator"
validation_key           "/home/James/.chef/random-validator.pem"
chef_server_url          "https://api.opscode.com/organizations/somemakebelieveaccount"
cache_type               'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path            ["#{current_dir}/../cookbooks"]

Here’s the snippet that gets added to each AWS-specific knife.rb:

knife[:aws_access_key_id] ='< AWS ACCESS KEY GOES HERE >'
knife[:aws_secret_access_key] ='< AWS SECRET KEY GOES HERE >'

So now, in order to use different AWS accounts, what I do is change into each of those AWS directories and run knife commands from there.
Each of the following commands gives me output only for the relevant AWS account.

$ cd ~/chef-aws/personal && knife ec2 server list
$ cd ~/chef-aws/work && knife ec2 server list
$ cd ~/chef-aws/thirdparty && knife ec2 server list

I leave my default ~/.chef/knife.rb file without AWS credentials in it,
because I don’t want to accidentally deploy to the wrong AWS account.
There’s still room for human error, but I suppose it’s better than nothing.
If someone has a better approach to this, I’d like to know about it.

Hash Keys and Values in General Ruby

Create a new Hash

node = Hash.new

Create a new Hash of Hash

node[:one] = Hash.new

Insert Values into your Hash of Hash

node[:one][:object] = "number1"
node[:one][:block] = "number2"

Print Key and Value

node[:one].each_pair do |k,v|
  puts k
  puts v
end

Print Keys only

node[:one].keys.each do |key|
  puts key
end

Print Values Only

node[:one].values.each do |value|
  puts value
end
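
Putting those together, you can walk the outer hash and the inner hash in one go:

node.each_pair do |outer_key, inner_hash|
  inner_hash.each_pair do |k, v|
    puts "#{outer_key}.#{k} = #{v}"
  end
end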

Keys and Values from Hashes in Ruby Templates

Reference: http://ruby-doc.org/stdlib-1.9.3/libdoc/erb/rdoc/ERB.html
ERB recognizes certain tags in the provided template and converts them based on the rules below:

<% Ruby code -- inline with output %>
<%= Ruby expression -- replace with result %>
<%# comment -- ignored -- useful in testing %>
% a line of Ruby code -- treated as <% line %> (optional -- see ERB.new)
%% replaced with % if first thing on a line and % processing is used
<%% or %%> -- replace with <% or %> respectively
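
Outside of Chef you can watch the same tags get processed with plain ERB (a small sketch; the hash here is just a stand-in for Chef’s real node object):

require 'erb'

node = { "hostname" => "ubuntu01", "platform" => "ubuntu" }
template = "Hostname : <%= node['hostname'] %>\nPlatform : <%= node['platform'] %>\n"
puts ERB.new(template).result(binding)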

Create a new “motd” cookbook

$ knife cookbook create motd

Example: ~/cookbooks/motd/recipes/default.rb

#
# Cookbook Name:: motd
# Recipe:: default
#
# Copyright 2012, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#

template "/home/motd" do
  source "motd.erb"
  owner "root"
end

Example: ~/cookbooks/motd/templates/default/motd.erb
* You can find out what hashes are defined on a system by running “ohai”

Hostname : <%= node["hostname"] %>
Platform : <%= node["platform"] %>

Memory Usage
<% node["memory"].each_pair do |k,v| %>
<%= k %>     : <%= v%>
<% end %>

Block Devices
<% node["block_device"].each_pair do |k,v| %>
Key: <%= k %>   -   <%= v%>
<% end %>

Network Info
<% node["network"]["interfaces"].keys.each do |k| %>
Key: <%= k %>
<% end %>

<% node["network"]["interfaces"].values.each do |v| %>
Value: <%= v %>
<% end %>

Run chef-client with the “motd” cookbook applied and look at the output:

$ cat /home/motd
Hostname: ubuntu01
Platform: ubuntu

Memory Usage
vmalloc_total     : 34359738367kB
anon_pages     : 129892kB
writeback     : 0kB
dirty     : 0kB
vmalloc_used     : 266104kB
vmalloc_chunk     : 34359469948kB
active     : 186264kB
buffers     : 31252kB
commit_limit     : 773540kB
nfs_unstable     : 0kB
slab_unreclaim     : 11684kB
bounce     : 0kB
slab_reclaimable     : 17644kB
mapped     : 11580kB
cached     : 190268kB
slab     : 29328kB
inactive     : 165136kB
free     : 105708kB
total     : 502612kB
committed_as     : 926460kB
page_tables     : 4324kB
swap     : cached0kBfree522236kBtotal522236kB

Block Devices
Key: sda   -   timeout30modelVMware Virtual Sremovable0vendorVMware,rev1.0size41943040staterunning
Key: sr0   -   timeout30modelVMware IDE CDR10removable1vendorNECVMWarrev1.00size1401432staterunning
Key: fd0   -   removable1size0
Key: loop7   -   removable0size0
Key: loop6   -   removable0size0
Key: loop5   -   removable0size0
Key: loop4   -   removable0size0
Key: loop3   -   removable0size0
Key: loop2   -   removable0size0
Key: loop1   -   removable0size0
Key: loop0   -   removable0size0
Key: ram15   -   removable0size131072
Key: ram14   -   removable0size131072
Key: ram13   -   removable0size131072
Key: ram12   -   removable0size131072
Key: ram11   -   removable0size131072
Key: ram10   -   removable0size131072
Key: ram9   -   removable0size131072
Key: ram8   -   removable0size131072
Key: ram7   -   removable0size131072
Key: ram6   -   removable0size131072
Key: ram5   -   removable0size131072
Key: ram4   -   removable0size131072
Key: ram3   -   removable0size131072
Key: ram2   -   removable0size131072
Key: ram1   -   removable0size131072
Key: ram0   -   removable0size131072

Network Info
Key: eth0
Key: lo

Value: mtu16436encapsulationLoopbackflagsLOOPBACKUPLOWER_U....< i truncated output >
Value: mtu1500typeethencapsulationEthernetflagsBROADCASTM....< i truncated output >

Also for shits and giggles
Example Ohai output:

root@ubuntu01:~# ohai
{
  "idletime": "35 minutes 35 seconds",
  "uptime": "36 minutes 47 seconds",
  "dmi": {
    "base_board": {
      "chassis_handle": "0x0000",
      "location_in_chassis": "Not Specified",
      "product_name": "440BX Desktop Reference Platform",
      "serial_number": "None",
      "manufacturer": "Intel Corporation",
      "version": "None",
      "type": "Unknown",
      "features": "None",
      "contained_object_handles": "0",
      "all_records": [
  .. < i truncated output >

Migrating Chef CouchDB to Multi-Master CouchDB

* assuming you are using ubuntu/debian
chef-server = 192.168.1.10
couchdb01 = 192.168.1.11
couchdb02 = 192.168.1.12

Enable chef-server couchdb to listen on all interfaces

root@chefserver:~# sed -i.bak 's/bind_address = 127.0.0.1/bind_address = 0.0.0.0/g' /etc/couchdb/default.ini
root@chefserver:~# /etc/init.d/couchdb restart

Install CouchDB on couchdb01/couchdb02 and set to listen on all interfaces

root@couchdb01:~# apt-get -y install couchdb
root@couchdb01:~# /etc/init.d/couchdb stop
root@couchdb01:~# sed -i.bak 's/bind_address = 127.0.0.1/bind_address = 0.0.0.0/g' /etc/couchdb/default.ini
root@couchdb01:~# /etc/init.d/couchdb start

Create the empty chef database on couchdb01/couchdb02

root@couchdb01:~# curl -X PUT http://localhost:5984/chef
{"ok":true}

root@couchdb02:~# curl -X PUT http://localhost:5984/chef
{"ok":true}

Push the chef database from chef-server to couchdb01/02 and enable a continuous replication

To Couchdb02

root@chef-server:/var/lib/couchdb# curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"chef","target":"http://192.168.1.12:5984/chef","continuous":true}'
{"ok":true,"_local_id":"77f057c373dca43097fac542c367b24f"}

To Couchdb01

root@chef-server:~# curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"chef","target":"http://192.168.1.11:5984/chef","continuous":true}'
{"ok":true,"_local_id":"e926c9297e5776db862ae3c1be27bbde"}

Set up the multi-master replication for couchdb01/02

Enable continuous replication FROM couchdb01 to couchdb02

root@couchdb01:/var/lib/couchdb# curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"chef","target":"http://192.168.1.12:5984/chef","continuous":true}'
{"ok":true,"_local_id":"77f057c373dca43097fac542c367b24f"}

Enable continuous replication FROM couchdb02 to couchdb01

root@couchdb02:/var/lib/couchdb# curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"chef","target":"http://192.168.1.11:5984/chef","continuous":true}'
{"ok":true,"_local_id":"77f057c373dca43097fac542c367b24f"}

Install Apache and generate config on Chef Server

root@chefserver:~# apt-get -y install apache2
root@chefserver:~# mkdir -p /usr/share/chef-server/public
root@chefserver:~# for i in rewrite proxy status proxy_http proxy_balancer headers ; do a2enmod $i ; done
root@chefserver:~# cd /etc/apache2/sites-available
root@chefserver:~# echo "Listen 5984" | tee -a chef_couchdb_loadbalancer
root@chefserver:~# echo '<VirtualHost *:5984>' | tee -a chef_couchdb_loadbalancer
root@chefserver:~# MYHOST=$(hostname -f)
root@chefserver:~# echo "ServerName ${MYHOST}-couchdb" |tee -a chef_couchdb_loadbalancer
root@chefserver:~# cat>>chef_couchdb_loadbalancer<<EOF
<Proxy balancer://couchlb>
BalancerMember http://192.168.1.11:5984
BalancerMember http://192.168.1.12:5984
</Proxy>
ProxyPass / balancer://couchlb
ProxyPassReverse / balancer://couchlb
DocumentRoot /usr/share/chef-server/public
LogLevel info
ErrorLog /var/log/chef/chef_couchdb_apache2-error.log
CustomLog /var/log/chef/chef_couchdb_apache2-access.log combined
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://couchlb%{REQUEST_URI} [P,QSA,L]
</VirtualHost>
EOF

Stop Couchdb on Chef-Server

*This will stop the continuous replication to couchdb01/couchdb02

root@chefserver:~# /etc/init.d/couchdb stop

Start Apache load balancer on Chef-Server

root@chefserver:~# a2ensite chef_couchdb_loadbalancer
root@chefserver:~# /etc/init.d/apache2 restart

Test your couchdb balancer:

http://192.168.1.10:5984/_utils

You probably also want to edit the init scripts on couchdb01/02 to automatically re-establish the continuous replication on start or restart.
The replication does not persist after you stop a couchdb instance unless you explicitly issue the command again (a rough sketch of such a helper follows).
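
Here is one way that helper could look; the init script would call it after couchdb comes back up (the target address is couchdb02 from this example; adjust per node):

#!/usr/bin/env ruby
# Re-establish the continuous chef replication after a couchdb restart
require 'rubygems'
require 'net/http'

uri = URI.parse('http://localhost:5984/_replicate')
request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
request.body = '{"source":"chef","target":"http://192.168.1.12:5984/chef","continuous":true}'
puts Net::HTTP.new(uri.host, uri.port).request(request).body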

Chef Server – Threading Merb – Chef Server API Service

This will only make your API tier faster if you have enough CPUs to support it.
It should be one merb worker per core (I think).

Install Apache and enable the necessary mods

$ apt-get -y install apache2
$ mkdir -p /usr/share/chef-server/public
$ for i in rewrite proxy status proxy_http proxy_balancer headers ; do a2enmod $i ; done

Stop Chef-Server

$ /etc/init.d/chef-server stop

Edit your Chef configuration file ( Replace Worker Threads and Port Numbers as needed )

$ sed -i s/PORT=4000/PORT=5000/g /etc/default/chef-server
$ echo "WORKERTHREADS=4" | tee -a /etc/default/chef-server

Edit the Chef init script ( Back it up first )

$ cp /etc/init.d/chef-server /etc/init.d/chef-server.original
$ sed -i '35s/DAEMON_OPTS="-p/DAEMON_OPTS="-c $WORKERTHREADS -p/g' /etc/init.d/chef-server
$ sed -i '42s/(ps/#(ps/g' /etc/init.d/chef-server
$ sed -i '/#(ps/ i (ps -fp $pid | egrep -q "merb.*( chef-server .*api.* spawner|worker .* $PORT)") || return 1' /etc/init.d/chef-server

Run diff to see the init script differences

$ diff /etc/init.d/chef-server /etc/init.d/chef-server.original
35c35
< DAEMON_OPTS="-c $WORKERTHREADS -p $PORT -e production -d -a $ADAPTER -P $PIDFILE -L $LOGFILE -C $CONFIG -u $USER -G $GROUP -V" --- > DAEMON_OPTS="-p $PORT -e production -d -a $ADAPTER -P $PIDFILE -L $LOGFILE -C $CONFIG -u $USER -G $GROUP -V"
42,43c42
< (ps -fp $pid | egrep -q "merb.*( chef-server .*api.* spawner|worker .* $PORT)") || return 1
<   #(ps -fp $pid | egrep -q "merb.*(merb : master|worker.*$PORT)") || return 1 --- >   (ps -fp $pid | egrep -q "merb.*(merb : master|worker.*$PORT)") || return 1

Get variables from your Chef config for apache config construction

$ THREADSCT=$(grep "WORKERTHREADS" /etc/default/chef-server |awk -F"=" '{print $2}')
$ NEWCOUNT=$(( THREADSCT - 1 ))
$ PORTPREFIX=$(grep "PORT" /etc/default/chef-server |awk -F"=" '{print $2}'| sed -e s/[0-9][0-9]$//g )
$ MYHOST=$(hostname -f)

Generate Your Apache Config – Generate the Load Balance Members

$ cd /etc/apache2/sites-available
$ echo "Listen 4000" |tee -a chef_loadbalancer
$ echo '<VirtualHost *:4000>' |tee -a chef_loadbalancer
$ echo "ServerName $MYHOST" |tee -a chef_loadbalancer
$ echo "" |tee -a chef_loadbalancer
$ echo "<Proxy balancer://cheflb>" | tee -a chef_loadbalancer
$ seq -w 00 $NEWCOUNT | while read i ; do echo "BalancerMember http://127.0.0.1:${PORTPREFIX}${i}" |tee -a chef_loadbalancer ; done

Append your Apache config with the rest of the relevant information

$ cat>>chef_loadbalancer<<EOF
</Proxy>
ProxyPass / balancer://cheflb
ProxyPassReverse / balancer://cheflb
DocumentRoot /usr/share/chef-server/public
LogLevel info
ErrorLog /var/log/chef/chef_server_apache2-error.log
CustomLog /var/log/chef/chef_server_apache2-access.log combined
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://cheflb%{REQUEST_URI} [P,QSA,L]
</VirtualHost>
EOF

Enable your new apache config and start Apache and Chef-Server

$ a2ensite chef_loadbalancer
$ /etc/init.d/apache2 restart
$ /etc/init.d/chef-server start

Test your chef server

$ time knife node list
$ time knife role list

Reference:

Much of this information was stolen from: http://mrmiller.nonesensedomains.com/2010/06/15/chef-performance-tuning-part-1/
If this technique is outdated, please make me aware of it, and likewise if my apache configuration is awful (which I’m sure it is; I’m just too lazy to improve it).

Chef Server – Backup and Restore

* This assumes you’re using the regular debian/ubuntu install,
and that you have a system with a fresh install of chef-server lying around

Back up your files on the OLD/original Chef Server.
This backs up all of the chef configs/certs and the couchdb configs and data.

$ tar czvf chef-backup-`date +%Y-%m-%d-%s`.tar.gz /etc/couchdb /var/lib/chef /var/lib/couchdb /var/cache/chef /var/log/chef /var/log/couchdb /etc/chef

Transfer your files to the NEW Chef Server

$ scp old-chefserver:~/chef-backup*.tar.gz /tmp

Stop all chef services on the NEW Chef Server

$ for i in chef-server-webui chef-server rabbitmq-server jetty couchdb chef-solr ; do /etc/init.d/${i} stop ; done

Extract the backup file on to the NEW Chef Server

$ cd /tmp
$ tar xzvf chef-backup*.tar.gz

Delete the old rabbitmq data from your default installation on the NEW Chef Server

$ rm -fr /var/lib/rabbitmq/mnesia/*

Start rabbitmq in the foreground (in a separate terminal) and recreate your vhost, user, and permissions on the NEW Chef Server

$ rabbitmq-server
$ rabbitmqctl add_vhost /chef
Creating vhost "/chef" ...
...done.
$ rabbitmqctl add_user chef testing
Creating user "chef" ...
...done.
$ rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"
Setting permissions for user "chef" in vhost "/chef" ...
...done.
$ rabbitmqctl stop
Stopping and halting node rabbit@ubuntu02 ...
...done.

Start all the Chef Services on NEW Chef Server

$ for i in chef-solr couchdb jetty rabbitmq-server chef-server chef-server-webui ; do /etc/init.d/${i} start ; done

Verify your Chef Server is restored by browsing to
http://NEW-CHEF-SERVER:4040
and checking the nodes and data, or try the following knife commands

$ knife node list
$ knife node show mynode -a node

Alternate Backup Methods from Opscode:

http://wiki.opscode.com/display/chef/Backing+Up+Chef+Server