PostgreSQL Replication to a Warm-Standby Using WAL Files

THEORY

Like any good relational database, PostgreSQL maintains a set of transaction log files known as write-ahead logs (WAL) in the pg_xlog directory.  These logs record every change to the database files and are used to recover from a crash.  If you crash, replay all the WAL files written since the last backup and you will be back in business right at the point of failure.

Well, if you have this capability, what about keeping a warm-standby system and feeding it all the WAL files?  If you teach the warm standby to continuously process the incoming write-ahead logs from the live system, you will have a system ready to go at a moment’s notice.  When you read about this setup elsewhere online, the primary server is usually called the ‘master’ and the secondary the ‘slave’.

NOTA BENE: Both your primary and your secondary need to be running the same major version of PostgreSQL.
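As a rough sketch of the moving parts (the archive path and the standby hostname here are assumptions; adjust for your environment), the primary ships each completed WAL segment over to the standby, and the standby replays segments as they arrive:

# On the primary, in postgresql.conf -- ship each finished WAL segment
archive_mode = on
archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

# On the standby, in recovery.conf -- pg_standby keeps replaying new segments
restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p %r'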

Restoring Files From RackSpace Cloud Files

If you are like me and have a cloud server at Rackspace, you probably have a backup of your server that runs weekly or daily, but you may never have found a nice way to access those files.  In fact, I was on chat with a Fanatical Support guy the other day, shortly after I had deleted my httpd.conf file.  I asked him if I could restore a file using my Cloud Files backups and he said “No”.

That bothered me, but I don’t expect the support guys to be all-knowing, even in a top-notch organization like Rackspace.  The real answer is yes.  Here is how it’s done.

If you are familiar with the API calls for interacting with Rackspace programmatically, you should probably skip this article; it’s going to be really basic.  If you want to learn those calls, I found a nice article here that describes pulling and extracting the files for a Windows image to get a .vhd file.

ANATOMY OF A BACKUP

Log in to the Rackspace Cloud interface and you should see a new(ish) addition to the Hosting menu.  Choose “Cloud Servers” under Open Cloud and you’ll enter a new interface.  Once there, click on “Files”.  At this point you can see your backup files.  Yes, you can see them in the old interface too, but you cannot download them there.

What I found was a set of files with a timestamp and a site ID in their names: one meta file that ends in .yml and describes all of the other compressed tarballs that contain the actual data.  You probably noticed that the tarballs are numbered incrementally (0, 1, 2, etc.).

---
name: daily_20120827_111111_cloudserver1111111.yml
format: tarball
image_type: full
files:
- daily_20120827_111111_cloudserver111111.tar.gz.0
- daily_20120827_111111_cloudserver111111.tar.gz.1
- daily_20120827_111111_cloudserver111111.tar.gz.2
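Incidentally, if you would rather script the download than click through the interface, the python-swiftclient CLI can talk to Cloud Files.  This is a sketch only: the auth endpoint and the container name below are assumptions, so run the list command first and adjust to what you actually see in your account.

# List what's in your account, then pull down one backup slice
swift -A https://auth.api.rackspacecloud.com/v1.0 -U myusername -K myapikey list
swift -A https://auth.api.rackspacecloud.com/v1.0 -U myusername -K myapikey download cloudservers daily_20120827_111111_cloudserver111111.tar.gz.0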

WHAT TO DO WITH THEM

If you have all the files in one directory, you should be able to address them like this.  Remember, I’m trying to find my httpd.conf.  This will find any and all httpd.conf files in the available tar.gz files.

# Search each backup slice for the file we want and extract any matches
for tarball in *cloudserver111111.tar.gz.*
do
    recoveryfile=`tar -tzf "$tarball" | grep httpd.conf`
    tar -zxvf "$tarball" $recoveryfile
done

You will want to change the file you are looking for (httpd.conf) and the first line, which defines the tarballs you want to look through.  I’d run find afterward to expose the directory structure that was created.
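For instance, after extracting, something like this will show you where the recovered files landed:

find . -name httpd.conf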

Hard Drive Related Cheatsheet for Linux


TERMINOLOGY:

  • Partition - a portion of physical hard disk space. A hard disk may contain one or more partitions, which are described by a partition table stored on the drive.
  • Volume - a logical concept which hides the physical organization of storage space. A compatibility volume corresponds directly to a partition, while an LVM volume may span more than one partition on one or more physical disks. Users see a volume as a single logical drive.
  • Physical Volume (PV) - a whole disk, or a partition initialized for use by LVM.
  • Volume Group (VG) - a set of one or more PVs which form a single storage pool. You can define multiple VGs on each system.
  • Logical Volume (LV) - a usable unit of disk space within a VG. LVs are used analogously to partitions on PCs or slices under Solaris: they usually contain filesystems or paging space (“swap”). Unlike a physical partition, an LV can span multiple physical volumes within the VG.
  • Root partition - the physical partition or logical volume that holds the root filesystem and the mount points for all other partitions.

LVM Commands

  • pvcreate /dev/hda3 – creates a physical volume
  • vgcreate vg01 /dev/hda3 – creates a volume group (in this case, vg01) using the physical volume
  • lvcreate -l25000 -nlv01 vg01 – creates a logical volume using the volume group, allocating 25000 extents
  • lvcreate -L4G -nlvroot vgraid1 – creates a 4 GB logical volume named lvroot
  • vgextend – adds a physical volume to a volume group (if you add a new disk)
  • lvdisplay -v /dev/vg01/lv01 – shows details of a logical volume
  • vgdisplay -v vg01 – shows details of a volume group
  • lvremove – removes a logical volume
  • vgreduce – removes a physical volume from a volume group
  • mkfs -t ext3 /dev/vg01/lv02 – makes a filesystem
  • mount /dev/vg01/lv02 /home/new – mounts the filesystem
  • mount -a – mounts everything in /etc/fstab
  • vgscan --mknodes – scans for volume groups and rebuilds their /dev entries
  • vgchange -a y /dev/vgraid1 – brings /dev/vgraid1 online if it didn’t come up automatically
  • lvscan – lists all logical volumes
  • mkfs -t ext4 -T small /dev/vgraid5extra – makes an ext4 filesystem tuned for small files
  • umount /home/new – unmounts a filesystem
  • pvresize – updates the size of a PV after the underlying disk or partition grows
  • lvextend -L+<size> /dev/vgraid0/lvsharedfiles0 – extends the LV by the given amount (e.g. -L+10G)
  • e2fsck -f /dev/vgraid1/lvsharedfilestemp – checks the filesystem
  • resize2fs /dev/vgraid1/lvsharedfilestemp – resizes the filesystem (growing now works online)
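To see how these fit together, here is a minimal end-to-end sketch; the device name /dev/sdb1 and the sizes are made up for illustration:

pvcreate /dev/sdb1               # initialize the partition as a PV
vgcreate vg01 /dev/sdb1          # pool it into a new volume group
lvcreate -L4G -nlv01 vg01        # carve out a 4 GB logical volume
mkfs -t ext3 /dev/vg01/lv01      # put a filesystem on it
mount /dev/vg01/lv01 /home/new   # and mount it
# Later, when you need more space:
lvextend -L+2G /dev/vg01/lv01    # grow the LV by 2 GB
resize2fs /dev/vg01/lv01         # grow the filesystem to match (works online)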

MDADM stuff

RAID tooling:

  • raidtools2 (configured via /etc/raid/raidtab)
  • mdadm (probably a better choice)

Useful references:

  • http://unthought.net/Software-RAID.HOWTO/
  • http://xtronics.com/reference/SATA-RAID-debian-for-2.6.html
  • http://juerd.nl/site.plp/debianraid

mdadm -Cv /dev/md0 --auto=yes -l1 -n2 missing /dev/sda1 – creates a degraded RAID 1 array
mdadm -Cv -c 256 /dev/md20 -l5 -n2 missing /dev/sdf1 – creates a degraded array with a 256K chunk size

mdadm -A -a /dev/md0 /dev/sda1 – assembles and starts an existing array

mdadm -A -a /dev/md1 /dev/sda2

Fixing a degraded array:
mdadm --add /dev/md7 /dev/sdd2

mkfs -t ext3 /dev/md20 – makes a filesystem
mkfs -t ext3 -T largefile4 -E stride=64,stripe-width=64 /dev/md20 – reduces inode count; runs faster with big files

largefile4 is defined in /etc/mke2fs.conf. The filesystem block size defaults to 4096 bytes, so with a chunk size of 256K we need a stride of 256/4 = 64; stripe-width should be the stride multiplied by the number of data disks in the array.

mdadm --detail --scan >> /etc/mdadm/mdadm.conf – appends the current array definitions to mdadm’s config file
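Since the whole point of RAID is surviving a dead disk, here is a quick sketch of swapping one out (device names made up for illustration):

mdadm /dev/md0 --fail /dev/sdb1     # mark the dying member as failed
mdadm /dev/md0 --remove /dev/sdb1   # pull it out of the array
# ...physically replace the disk and recreate the partition, then:
mdadm /dev/md0 --add /dev/sdb1      # add the replacement; the rebuild starts
cat /proc/mdstat                    # watch the rebuild progress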

Critical Warning Number 6 – (Fake or Scam)

Every now and then I hear a commercial on the radio that piques my interest.  In this case it’s the radio ad “Critical Warning Number 6”, which I ended up listening to every morning on the way to work in July and August 2012.  In part, it goes:

Something very big will happen in America within the next 180 days. It will be more devastating than the credit crisis of 2008. For most people, it will hit them like a brick wall. It will touch Americans harder and deeper than anything else we've seen since the Great Depression. Michael Lombardi feels so strongly about this, he's decided to present his "Critical Warning Number Six" in a new video. In case you're not familiar with him... Michael Lombardi has been widely recognized as predicting five major economic events over the past 10 years. In 2002, he started advising his readers to buy gold– related investments when gold traded under $300 an ounce. In 2006, he begged his readers to get out of the housing market...before it plunged.

My thoughts, and please share your own on this issue below:

  1. Scare Tactic – The voice used for the advertising is a threatening male voice about to tell you something you don’t want to hear.
  2. Timing – “Within the next 180 days…”  As of this writing, if you poke around online you can find mention of this ‘prediction’ going back to November of 2011 and possibly further back.
  3. Verification of Past Predictions – I tried to use the Internet Archive Wayback Machine to see what this website looked like in the past.  After all, if Mr. Lombardi has posted his past predictions online we should be able to see them.  Unfortunately, the Wayback Machine doesn’t seem to have any old archives of his content.
  4. Who Else Made These Predictions? – I read a book prior to the present economic crisis by Harry S. Dent that seems to say the same thing Michael Lombardi is saying, but with a few differences, and the information is verifiable.  The Wayback Machine shows Dent ready to publish his book back in July of 2008.  Dent sells a newsletter that essentially draws parallels to the Great Depression 80-some years ago.  It’s a good read, and I’m a firm believer that history is a fine indicator of the future.  But as the immortal Samuel Clemens once said, “History does not repeat itself, but it does rhyme.”
  5. Why Share Your Knowledge? – If you lived in the future, let’s say the year 2022, and you traveled back to 2012, what would you do with your knowledge?  How would you leverage it?  One way would be to set up a newsletter and charge $200/year sharing your knowledge with others.  Another would be to take your capital and make smart investments.  Then there is a third way: you could invest based on your knowledge and then share that knowledge as a newsletter.  If you invest first and have a large enough following, you can make money as your followers invest and drive up prices.

I’ll probably come back to this and add as time allows.

About Jay Farschman - Jay currently works as a Senior Systems Administrator for an asset management company in Colorado, where he works with companies that produce hardware, telecommunications software and financial services.  Jay previously owned a consulting company and provided training and consulting services for three Fortune 500 companies and numerous small businesses, where he leveraged Linux to provide exceptional value.

Viewing Your Linux Hardware with DMIDECODE

I never like opening up a running system when I can simply query it with a simple command for the information I need.  dmidecode is a great tool for polling hardware information in a human-readable format.

In its simplest form, you dump all the information to the screen:

dmidecode

but that’s a bit much, so try running with the -t argument, which narrows the output to a single component type (bios, system, baseboard, chassis, processor, memory, cache, connector, or slot).  So, for instance, if you need to learn how much RAM your system can handle:

# dmidecode -t memory
# dmidecode 2.10
SMBIOS 2.7 present.
# SMBIOS implementations newer than version 2.6 are not
# fully supported by this version of dmidecode.

Handle 0x0027, DMI type 16, 23 bytes
Physical Memory Array
    Location: System Board Or Motherboard
    Use: System Memory
    Error Correction Type: Single-bit ECC
    Maximum Capacity: 32 GB
    Error Information Handle: No Error
    Number Of Devices: 4
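If you are scripting around dmidecode, the -s flag (available in recent versions) prints a single value with no decoration, which is handy for inventory scripts. A couple of examples:

dmidecode -s system-serial-number
dmidecode -s baseboard-product-name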

Enjoy, and let me know how you end up using this command.


REFERENCE – Files edited for Pentaho Setup.

These are the files I edited when setting up Pentaho on a RedHat/CentOS server.  This is really for personal reference, but if you have any questions I may be able to help.

/opt/pentaho/biserver-ce/pentaho-solutions/system/applicationContext-spring-security-jdbc.xml
/opt/pentaho/biserver-ce/pentaho-solutions/system/dialects/mysql5/hibernate-settings.xml
/opt/pentaho/biserver-ce/tomcat/webapps/pentaho/META-INF/context.xml
/opt/pentaho/biserver-ce/pentaho-solutions/system/publisher_config.xml
/opt/pentaho/biserver-ce/tomcat/webapps/pentaho/WEB-INF/web.xml
/opt/pentaho/biserver-ce/tomcat/conf/server.xml
/opt/pentaho/biserver-ce/pentaho-solutions/system/pentaho.xml
/opt/pentaho/biserver-ce/pentaho-solutions/system/pentaho-spring-beans.xml
/opt/pentaho/biserver-ce/pentaho-solutions/system/applicationContext-spring-security-ldap.xml
/opt/pentaho/biserver-ce/pentaho-solutions/system/smtp-email/email_config.xml

# Authentication Location
/opt/pentaho/biserver-ce/pentaho-solutions/system/applicationContext-security-ldap.properties
/opt/pentaho/biserver-ce/pentaho-solutions/system/applicationContext-spring-security-ldap.xml

Installing Eclipse & VMWare Studio Explorer on Mac OSX Lion

Older versions of Mac OS require a visit to Apple’s developer site to download a JDK, but if you have Lion then you simply need to choose the proper download from the Eclipse site.  I used Eclipse Classic 3.7.2.

  1. Visit the Eclipse download site and grab Eclipse Classic.
  2. It comes as a tar.gz file.  Expand it and place it in your Applications directory.
  3. Double-click on the Eclipse application.
  4. Go to “Preferences | Install/Update | Available Software Sites” and choose “Add”.
  5. Enter the location of your VMware Studio update site (in my case http://10.1.1.101/eclipse/update) and give it a name like “VMWare VM Studio”.
  6. Click the “Reload” button.  If your new site is properly set up, it should come back error free and you can close that dialog.
  7. Click “Help | Install New Software…” and use the drop-down to select your new site.
  8. Use the triangle to explore the available software, choose “VMWare Studio Plugin for Eclipse” and select the “Next” button.
  9. After working out all the dependencies, the installer will require you to accept a licensing agreement.  Click “Finish” when done.
  10. Select “Window | Show View | Other” and locate your VMware Studio Explorer.  Select it and click OK.
  11. You will have to restart Eclipse.

Once it comes back you will have a new explorer tab and a fine place from which to administer VMware Studio.



Building a NIS User Add Script

I have an environment where Solaris provides NIS for all the Solaris and Linux systems.  Every time I add a user I’ve had to alter a number of files by hand, and that’s pretty lame, so the script below automates the process.

If you have any questions please ask.

#!/bin/bash
###################
# NewUser.sh creates a new user in the NIS environment and pushes that
# user information out to the server systems.
#
# NewUser.sh v1.0 - jay@zidea.com
#
###################
### Declarations
declare -rx SCRIPT=${0##*/}
declare USERNAME
declare FULLNAME
declare PASSWORD
declare USER_HOME
declare LASTID
declare USERID
# Log file for error messages (path is an assumption; adjust to taste)
declare LOGFILE="/var/log/newuser.log"

### Check that we have the right privileges
if [ "$USER" = "root" ]
then
    ### Collect the variables
    printf "%s\n" "Enter the user's name (firstname lastname): "
    read -e FULLNAME
    printf "%s\n" "Enter the USERNAME (8 characters or less): "
    read -e USERNAME

    # Other variables: the next UID is one past the last entry in /etc/passwd
    # (assumes /etc/passwd is sorted by UID)
    USER_HOME="/home/$USERNAME"
    LASTID=`tail -1 /etc/passwd | cut -f3 -d:`
    USERID=`expr $LASTID + 1`

    # Check whether the user already exists (exact match on the login field)
    cut -d: -f1 /etc/passwd | grep -x "$USERNAME" > /dev/null
    OUT=$?

    # Test for the account and build the files
    if [ $OUT -eq 0 ]
    then
        echo >&2 "ERROR: User account: \"$USERNAME\" already exists."
        echo "ERROR: User account: \"$USERNAME\" already exists." >> "$LOGFILE"
    else
        # Create the local user and set an initial password
        /usr/sbin/useradd -u $USERID -d $USER_HOME -g staff -s /bin/bash -c "$FULLNAME" -m $USERNAME
        passwd $USERNAME

        # Copy the hashed password into the NIS source files
        PASSWORD=`grep "^$USERNAME:" /etc/shadow | cut -f2 -d:`
        echo $USERNAME:x::::: >> /etc/nis_etc/security/passwd.adjunct
        echo $USERNAME:$PASSWORD:$USERID:10:"$FULLNAME":$USER_HOME:/bin/bash >> /etc/nis_etc/passwd
        echo $USERNAME:$PASSWORD:14785:::::: >> /etc/nis_etc/shadow

        # Rebuild and push the Yellow Pages (NIS) maps
        pushd /var/yp
        make
        popd

        # Set up the $HOME directory on the file server
        ssh root@home.server.com mkdir -pv /files/$USERNAME
        ssh root@home.server.com chown -R $USERID /files/$USERNAME
        ssh root@home.server.com chgrp -R wheel /files/$USERNAME

        echo "The user \"$USERNAME\" has been created."
    fi
    exit 0
else
    echo >&2 "ERROR: You must be a root user to execute this script."
    exit 1
fi

Tuning MySQL – Because by default it’s not even close to tuned.

Basic tuning of MySQL is accomplished in the /etc/my.cnf file. If you want to get all geeky about this, reference the seminal documentation over on the MySQL dev site. Tuning should result in a speed increase on your system.  It certainly has on mine, running MySQL 5.x.

The information below is expressed as a set of ratios that begins with your system RAM and then works from there.

innodb_buffer_pool_size = $SYSTEMRAM/2
innodb_additional_mem_pool_size = $innodb_buffer_pool_size/20
innodb_log_file_size = $innodb_buffer_pool_size/4
innodb_log_buffer_size = $innodb_buffer_pool_size/50 or a minimum value of 8MB
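Worked through for a hypothetical dedicated server with 8 GB of RAM, those ratios come out roughly like this (values rounded; adjust for your own box):

[mysqld]
innodb_buffer_pool_size = 4G            # 8 GB RAM / 2
innodb_additional_mem_pool_size = 200M  # buffer pool / 20
innodb_log_file_size = 1G               # buffer pool / 4
innodb_log_buffer_size = 80M            # buffer pool / 50, above the 8 MB floor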

Nota bene: Changing your log file size can result in MySQL refusing to start.  Simply remove the old log files (ib_logfile0, ib_logfile1) from your MySQL data directory and they will be recreated on the next startup.