Setting up Simple MySQL Database Replication

ABOUT THE SETUP
I operate a busy OpenStack environment that is used by lots of people making changes every day.  A typical day will see over 100 instances spawned and terminated, though some last longer than a day.  Because OpenStack maintains a state database mapping the OpenStack layers to the KVM/libvirt layers, it’s critical to have not only backups but MySQL replication in place.  A failure 18 hours after my last backup could leave me scrambling through over 100 instances, performing some horrible manual fixes.

With that in mind I’m building a second controller for OpenStack which will keep a replicated copy of the database.  The plan is to add HAProxy into the mix to manage all connections from the Compute Nodes to the Controllers, as well as incoming HTTPS connections from my users into the Controllers.

In the end, that’s where I’m going.  This article is just about the first part: MySQL replication.  BTW: this is on Ubuntu 12.04.

REFERENCE: https://dev.mysql.com/doc/refman/5.0/en/replication-howto.html

Like all of the documentation from MySQL, it’s written at a pretty high level and is 100% factually correct while also being slightly unclear and missing steps.   What is wrong with examples?  Why not add some?
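In that spirit, here is the barest sketch of the pieces involved (the hostname, credentials, and log coordinates below are placeholders, not my actual setup).  The master gets a unique server ID and binary logging turned on; the slave just needs its own unique ID:

# master: /etc/mysql/my.cnf
[mysqld]
server-id = 1
log_bin   = /var/log/mysql/mysql-bin.log

# slave: /etc/mysql/my.cnf
[mysqld]
server-id = 2

Then, on the slave, point it at the master (the file and position values come from SHOW MASTER STATUS on the master) and start replicating:

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='controller1',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='secret',
    ->   MASTER_LOG_FILE='mysql-bin.000001',
    ->   MASTER_LOG_POS=107;
mysql> START SLAVE;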


Troubleshooting Slow Linux Systems

If your system is running slowly, and this goes for RHEL, Debian, and other variants, then take a look at this article, which is a simple walkthrough of the tools you can use to solve problems.  These specific examples are from a system running OpenStack, but that’s not important to most of you (a quick sketch of the invocations follows the list):

  • top – The place to start is generally the ‘top’ command, which shows a resource summary and task list.
  • iostat – Shows the reads and writes on your disks.
  • iotop – A top-like, realtime view of per-process disk I/O.
  • iozone – Generates some test traffic to see how the system reacts.
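
For reference, here is roughly how I invoke each of these (iostat ships in the sysstat package on most distros; iotop and iozone are their own packages, and the flags below are just my habits):

top                  # overall CPU, memory, and load-average summary
iostat -x 5          # extended per-device disk stats, refreshed every 5 seconds
sudo iotop -o        # show only the processes actually doing I/O right now
iozone -a -g 1G      # automatic benchmark mode with file sizes up to 1 GB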


Burn an ISO to a Thumb Drive on the Mac

Reposted from Andrew King at work:


Insert your thumbdrive. You should see it pop up on your desktop, all nice and mounted, ready for you to use. Only it isn’t. If you can see it on your desktop, you can’t use it the way we need to.
$ diskutil list
You should see something like this:
/dev/disk0
   #:      TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme       *251.0 GB   disk0
   1:      EFI                          209.7 MB   disk0s1
   2:      Apple_CoreStorage            249.8 GB   disk0s2
   3:      Apple_Boot Recovery HD       814.4 MB   disk0s3
/dev/disk1
   #:      TYPE NAME                    SIZE       IDENTIFIER
   0:      Apple_HFS Macintosh HD      *249.5 GB   disk1
/dev/disk2
   #:      TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme       *1.0 TB     disk2
   1:      EFI                          209.7 MB   disk2s1
   2:      Apple_HFS My Passport        999.8 GB   disk2s2
/dev/disk3
   #:      TYPE NAME                    SIZE       IDENTIFIER
   0:      FDisk_partition_scheme      *1.0 GB     disk3
   1:      DOS_FAT_32 UNTITLED          1.0 GB     disk3s1
In the instance above, I’ve noted my thumbdrive – /dev/disk3. It happened to be “UNTITLED”, but it could be whatever name you may have given it. Anyhow, we essentially need to unmount everything on that disk, but not remove it completely from the system.
$ diskutil unmountDisk /dev/disk3
Unmount of all volumes on disk3 was successful
That solves that. Now, where is that ISO image you have? You need to know where it is so you can copy it to that thumbdrive. By the way – do this on a thumbdrive that is either backed up, or in other ways not useful (like old, and small, whatever). You’ll note the one I’m using is 1G. LOL! I got it with a Cisco router that I bought. It came with “management” software on it that sucks. But I digress. That thumbdrive, when it’s all over, will have NOTHING on it but the ISO you’re “burning” to it. On to the destructive part…
$ sudo dd if=~/Desktop/crunchbang-11-20130506-i486.iso of=/dev/rdisk3 bs=128k
6168+0 records in
6168+0 records out
808452096 bytes transferred in 186.954879 secs (4324317 bytes/sec)
I should note – you won’t see any activity (more on that in a second). You hit [Return] and the computer goes to work, not telling you if it’s working or not. dd is one of those classic commands – it ain’t fancy, or pretty, or all high-falutin’. It does one thing, and it does exactly what you told it to do, or it errors. I should also note – if you screw up and point that at something important, like, in my disk list above, disk0? Well, you’ll have the joy of reinstalling the OS on your Mac, most likely.
Breakdown:
sudo - most of you know what this is. I hope. It allows you to do things as the “superuser” on your Mac (i.e., superuser do).
dd - command to convert and copy a file (I don’t know why it’s named that).
if=<filename> - input file
of=<filename> - output file
bs=n - block size… You can use 1m, 1k, 128k – you get the idea. I like 128k, there’s not much speed gained by anything bigger.
Some of you are looking at that output file location and thinking “Where in the heck did he get rdisk3 from??” Well… It’s the same location as disk3, only it’s a “raw” device connection. We’re stepping outside the rules a little bit – taking a little-known shortcut that doesn’t have any stoplights, if you will. Or speed limits. You’ll essentially move the same data in about 1/6th of the time if you use /dev/rdisk[n].
The last thing that you should do is eject it – but how do you do that, since you can’t drag it to the trash? diskutil still has you covered.
$ diskutil eject disk3
Disk disk3 ejected
If you really need to “see” something, you’ll need to have (or install) Pipe Viewer. Unless you already have some system like MacPorts or Homebrew, have it working, and know how to use it, just suck it up and deal with not seeing something. Really. I mean it. Anyhow, maybe you’re a dork like me, and you have Pipe Viewer. Instead of the dd command we used initially, we’re going to modify it.
$ pv -petr ~/Desktop/crunchbang-11-20130506-i486.iso | sudo dd of=/dev/rdisk3 bs=128k
0:00:33 [4.11MiB/s] [===>                          ] 17% ETA 0:02:40
There you go. A nice little visual guide for elapsed time, how fast, and an estimate for how long it’s going to take. If you want to know more about Pipe Viewer, there’s a great article on it here.


PostgreSQL Replication to a Warm-Standby Using WAL Files

THEORY

Like a good relational database, PostgreSQL maintains a set of transactional log files known as write-ahead logs (WAL) in the pg_xlog directory.  These logs are written to for every change in the database files and are used to recover from a crash.  If you crash, replay all the WAL files since the last backup and you will be back in business right at the point of failure.

Well, if you have this capability, what about keeping a warm-standby system and feeding it all the WAL files?  If you teach the warm standby how to continuously process the incoming write-ahead logs from the live system, you will have a system ready to go at a moment’s notice.  When you read about this setup in other places online, the primary server is known as the ‘master’ and the secondary as the ‘slave’.
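
As a rough sketch of the moving parts (the hostname and archive path here are placeholders, not my real layout): the primary ships each completed WAL segment to the standby, and the standby sits in recovery mode replaying whatever arrives.

# primary: postgresql.conf -- archive each finished WAL segment to the standby
archive_mode = on
archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

# standby: recovery.conf -- pg_standby waits for and replays incoming segments
restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p %r'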

NOTA BENE: Both your primary and your secondary need to be running the same major version of PostgreSQL.

Restoring Files From RackSpace Cloud Files

If you are like me and have a cloud server on Rackspace, you probably have a backup of your server that runs weekly or daily but may have never found a nice way to access these files.  In fact, I was on chat with a Fanatical Support guy the other day shortly after I had deleted my httpd.conf file.  I asked him if I could restore a file using my cloud file backups and he said “No”.

That bothered me, but I don’t expect the support guys to be all-knowing, even if it is a top-notch organization like Rackspace.  The real answer is yes.  Here is how it’s done.

If you are familiar with the API calls for interacting with Rackspace programmatically, you should probably skip this article; it’s going to be really basic.  If you want to learn these calls, I found a nice article here that describes pulling and extracting the files for a Windows image and getting a .vhd file.

ANATOMY OF A BACKUP

Log in to the Rackspace Cloud interface and you should see a new(ish) addition to the Hosting menu.  Choose “Cloud Servers” under the Open Cloud and you’ll enter a new interface.  Once there, click on “Files”.  At this point you see your files.  Yes, you can see them in the old interface, but you cannot download them.

What I found was a set of files with a timestamp and a site ID in their names: one meta file that ends in .yml and describes all of the other compressed tarballs that contain the actual data.  You probably noticed that the tarballs are numbered incrementally (0, 1, 2, etc.).

---
name: daily_20120827_111111_cloudserver1111111.yml
format: tarball
image_type: full
files:
- daily_20120827_111111_cloudserver111111.tar.gz.0
- daily_20120827_111111_cloudserver111111.tar.gz.1
- daily_20120827_111111_cloudserver111111.tar.gz.2

WHAT TO DO WITH THEM

If you have all the files in one directory you should be able to address them like this.  Remember, I’m trying to find my httpd.conf.  Well, this is going to find any and all httpd.conf files in the tar.gz files available.

for tarball in *cloudserver111111.tar.gz.*
do
    # List the tarball's contents and keep only entries matching httpd.conf
    recoveryfile=`tar -tzf $tarball | grep httpd.conf`
    # Only extract when a match was found; an empty argument would extract everything
    if [ -n "$recoveryfile" ]; then
        tar -zxvf $tarball $recoveryfile
    fi
done

You will want to change the file you are looking for (httpd.conf) and the first line, which defines the files you want to look through.  I’d run find at the end to expose the directory structure that was created.
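
Something like this shows where the recovered copy landed (the path in the comment is just a typical example, not a guarantee):

find . -name httpd.conf
# e.g. ./etc/httpd/conf/httpd.conf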

Viewing Your Linux Hardware with DMIDECODE

I never like opening a running system when I can simply query it with a simple command for the information needed.  dmidecode is a great tool for polling hardware information in human-readable format.

In its simplest form you dump all the information to the screen:

dmidecode

but that’s a bit much, so try running with the -t argument, which lets you narrow the search down to one of the components (bios, system, baseboard, chassis, processor, memory, cache, connector, slot).  So, for instance, if you need to learn how much RAM your system can handle:

# dmidecode -t memory
# dmidecode 2.10
SMBIOS 2.7 present.
# SMBIOS implementations newer than version 2.6 are not
# fully supported by this version of dmidecode.

Handle 0x0027, DMI type 16, 23 bytes
Physical Memory Array
    Location: System Board Or Motherboard
    Use: System Memory
    Error Correction Type: Single-bit ECC
    Maximum Capacity: 32 GB
    Error Information Handle: No Error
    Number Of Devices: 4

Enjoy, and let me know how you end up using this command.


Building a NIS User Add Script

I have an environment where Solaris provides NIS for all the Solaris and Linux systems.  Every time I add a user I have to alter a number of files, and that’s pretty lame.

If you have any questions please ask.

#!/bin/bash
###################
# NewUser.sh creates a new user in the NIS environment and pushes that
# user information out to the server systems.
#
# NewUser.sh v1.0 - jay@zidea.com
#
###################
### Declarations
declare -rx SCRIPT=${0##*/}
declare -rx LOGFILE="/var/log/newuser.log"   # error log location; adjust to taste
declare USERNAME
declare FULLNAME
declare PASSWORD
declare USER_HOME
declare LASTID
declare USERID
### Check that you have the right privileges
if [ "$USER" = "root" ]
then
    ### Collect the variables
    echo ""; echo ""; echo ""; echo ""
    printf "%s\n" "Enter the user's name (firstname lastname): "
    read -e FULLNAME
    printf "%s\n" "Enter the USERNAME (8 characters or less): "
    read -e USERNAME
    # Other variables
    USER_HOME="/home/$USERNAME"
    # Assumes /etc/passwd is ordered by UID: new UID = highest UID + 1
    LASTID=`tail -1 /etc/passwd | cut -f3 -d:`
    USERID=`expr $LASTID + 1`
    # Check whether the user already exists (-x requires a whole-line match)
    cut -d: -f1 /etc/passwd | grep -x "$USERNAME" > /dev/null
    OUT=$?
    # Test for the account and build the files
    if [ $OUT -eq 0 ]; then
        echo >&2 "ERROR: User account: \"$USERNAME\" already exists."
        echo "ERROR: User account: \"$USERNAME\" already exists." >> "$LOGFILE"
    else
        # Create the new user locally
        /usr/sbin/useradd -u $USERID -d $USER_HOME -g staff -s /bin/bash -c "$FULLNAME" -m $USERNAME
        passwd $USERNAME
        # Grab the freshly set password hash for the NIS source files
        PASSWORD=`grep "^$USERNAME:" /etc/shadow | cut -f2 -d:`
        echo "$USERNAME:x:::::" >> /etc/nis_etc/security/passwd.adjunct
        echo "$USERNAME:$PASSWORD:$USERID:10:$FULLNAME:$USER_HOME:/bin/bash" >> /etc/nis_etc/passwd
        echo "$USERNAME:$PASSWORD:14785::::::" >> /etc/nis_etc/shadow
        # Rebuild and push the Yellow Pages (NIS) maps
        pushd /var/yp
        make
        popd
        # Set up the $HOME directory on the file server
        ssh root@home.server.com mkdir -pv /files/$USERNAME
        ssh root@home.server.com chown -R $USERID /files/$USERNAME
        ssh root@home.server.com chgrp -R wheel /files/$USERNAME
        echo "The user \"$USERNAME\" has been created."
    fi
    exit 0
else
    echo >&2 "ERROR: You must be a root user to execute this script."
    exit 1
fi

Tuning MySQL – Because by default it’s not even close to tuned.

Basic tuning of MySQL is accomplished in the /etc/my.cnf file. If you want to get all geeky and into this, reference the seminal document over on the MySQL dev site. This should result in a speed increase in your system.  It certainly has in my system running MySQL 5.x.

The information below is expressed as a set of ratios that begins with your system RAM and then works from there.

innodb_buffer_pool_size = $SYSTEMRAM/2
innodb_additional_mem_pool_size = $innodb_buffer_pool_size/20
innodb_log_file_size = $innodb_buffer_pool_size/4
innodb_log_buffer_size = $innodb_buffer_pool_size/50 or a minimum value of 8MB
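
For example, on a hypothetical box with 8 GB of RAM, those ratios work out to something like this in /etc/my.cnf:

[mysqld]
innodb_buffer_pool_size         = 4G     # 8 GB RAM / 2
innodb_additional_mem_pool_size = 200M   # buffer pool / 20
innodb_log_file_size            = 1G     # buffer pool / 4
innodb_log_buffer_size          = 80M    # buffer pool / 50, above the 8MB floor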

Nota bene: Changing your log file size can result in MySQL refusing to start.  Simply remove the old log files from your MySQL data directory and they will be recreated on the next startup.

Script to Move Database Location – MySQL

Don’t run this script.  It’s a concept that I haven’t tested, and running it is pretty well guaranteed to crash your MySQL server.  It’s designed to make the relocation of data faster, but I don’t have time to finish it today.

You should probably use this fellow’s link because it works… it’s just slower and manual.  Oh, and if you do get a scripting urge, please make this script work properly for me and post it in a comment.  Thanks.


#!/bin/bash
# Untested sketch: relocate the MySQL data directory. Stop mysqld before running.
USER=root
PASSWORD=yourpassword
DBS="$(mysql --user=$USER --password=$PASSWORD -Bse 'show databases')"
OLDDATA_DIR="/var/lib/mysql"
NEWDATA_DIR="/database/lib/mysql"

service mysqld stop

mkdir -pv $NEWDATA_DIR

# Copy each database directory to the new location
# (note: InnoDB's shared ibdata/ib_logfile files live at the top of the
# datadir and are NOT picked up by this loop)
for DATABASE in $DBS; do
        cp -R $OLDDATA_DIR/$DATABASE $NEWDATA_DIR/$DATABASE
done

# Set permissions
chown -R mysql:mysql $NEWDATA_DIR

# Archive the old directory & link it to the new one
mv $OLDDATA_DIR ${OLDDATA_DIR}-old
ln -s $NEWDATA_DIR $OLDDATA_DIR

# Point the init script and my.cnf at the new datadir
# (double quotes so the shell actually expands the variables)
sed -i "s|$OLDDATA_DIR|$NEWDATA_DIR|" /etc/init.d/mysqld
sed -i "s|$OLDDATA_DIR|$NEWDATA_DIR|" /etc/my.cnf

service mysqld start

Setting up Apache Log File Rotation

This how-to walks users through setting up proper log file rotation for a multi-site Apache installation where the log files are broken out by site. I built all this on my own but forgot about log file rotation, so now the log files just keep growing and growing.  Time to institute a log rotation scheme.

For the most part, when you are working with Unix you will find that the syslog daemon handles how messages are logged on your system, but Apache handles its own logs, and the details are typically kept in the httpd.conf file.

sudo grep -i 'log' /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/*
# Custom log file locations
LogLevel warn
ErrorLog  /var/www/html/site1.com/log/error.log
CustomLog /var/www/html/site1.com/log/access.log combined
# Custom log file locations
LogLevel warn
ErrorLog  /var/www/html/site2.com/log/error.log
CustomLog /var/www/html/site2.com/log/access.log combined
# Custom log file locations
LogLevel warn
ErrorLog  /var/www/html/site3.com/log/error.log
CustomLog /var/www/html/site3.com/log/access.log combined

So, grepping gives me a listing of logfile locations for each of the sites, and as you can see they are all located in different directories.  You probably also noticed that I grepped through the conf.d directory as well; a lot of stuff will want to install there, like phpMyAdmin or webalizer or ssl.conf.  One other note: some installations will have their config files in an apache2 directory.
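
To stop the growth, a single logrotate stanza can cover every per-site log directory.  Here is a minimal sketch (the filename and the weekly/keep-4 policy are my choices, not requirements), dropped in /etc/logrotate.d/httpd-sites:

/var/www/html/*/log/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /sbin/service httpd reload > /dev/null 2>&1 || true
    endscript
}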