Author: Michael Blood
Automating patch installation on XenServer
I have four instances of freshly installed XenServer 6.2, and there are about a dozen patches for each that need to be applied. Herein, I will attempt to somewhat automate the application of these patches.
What you will need to know:
The list of patches required
The URLs of the download pages for each patch to be applied
The UUID of the patch
The UUID of the target host. This can be found by running, on the console:
xe host-list
The procedure is as follows:
Use the wget command to download the patch. I’ll use Service Pack 1 as an example. We want that patch first, as it is cumulative, and will cut down the number of other patches to be installed.
To find the URL where SP1 resides, go to the XenCenter Console, Tools, Check for Updates. This will give you a list of patches available for your server, with links to the download location. Click on the link for XS62ESP1. On the web page that opens, click on "Download". This will open another page. This is the URL that you want. Copy it to the clipboard.
You'll need the URLs and filenames for all the applicable patches when you customize your script.
http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip
Now initiate the wget command at the console in XenCenter using this URL:
wget http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip
NOTE: The URL is case sensitive!
Then unzip it:
unzip XS62ESP1.zip
This zip file contains a file ending with the extension ".xsupdate", as in "XS62ESP1.xsupdate". That's our patch.
Register the patch on the Target Server:
xe patch-upload file-name=XS62ESP1.xsupdate
This will display the uuid of the patch. We’ll need that for the next command. You can call it up again with:
xe patch-list
Install the patch:
xe patch-apply uuid=<The patch uuid from the xe patch-list command> host-uuid=<The host uuid from the xe host-list command>
Due to limited disk space, we’re going to clean out our working directory:
rm *
Now we'll write our script. Notice that we're doing the upload/registration and apply in one command.
Start of Script:
wget http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip
unzip XS62ESP1.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/8737/XS62ESP1002.zip
unzip XS62ESP1002.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1002.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9058/XS62ESP1005.zip
unzip XS62ESP1005.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1005.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9491/XS62ESP1008.zip
unzip XS62ESP1008.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1008.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9617/XS62ESP1009.zip
unzip XS62ESP1009.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1009.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9698/XS62ESP1011.zip
unzip XS62ESP1011.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1011.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9703/XS62ESP1013.zip
unzip XS62ESP1013.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1013.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9708/XS62ESP1014.zip
unzip XS62ESP1014.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1014.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/10128/XS62ESP1015.zip
unzip XS62ESP1015.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1015.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/10134/XS62ESP1012.zip
unzip XS62ESP1012.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1012.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/10174/XS62ESP1016.zip
unzip XS62ESP1016.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1016.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
Now just copy the text you created, with your patches listed in place of mine, and paste it into the console. You'll be off and running! (Go get coffee.)
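If you prefer not to repeat the same three lines for every patch, here is a minimal loop-based sketch of the same script. It assumes the same patch URLs as above and that you run it from a scratch directory on the host; treat it as a starting point rather than a drop-in replacement.
# Hedged sketch: same steps as above, driven by a list of patch URLs.
# Add the remaining patch URLs from the list above to PATCH_URLS.
PATCH_URLS="
http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip
http://downloadns.citrix.com.edgesuite.net/8737/XS62ESP1002.zip
"
HOST_UUID=`grep -B1 -f /etc/hostname <(xe host-list) | head -n1 | awk '{print $NF}'`
for URL in $PATCH_URLS
do
  FILE=`basename $URL .zip`
  wget $URL
  unzip ${FILE}.zip
  PATCH_UUID=`xe patch-upload file-name=${FILE}.xsupdate 2>&1 | tail -1 | awk '{print $NF}'`
  xe patch-apply uuid=$PATCH_UUID host-uuid=$HOST_UUID
  rm -f ${FILE}.zip ${FILE}.xsupdate   # clean up to conserve disk space
done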
Matt Long
02/17/2015
Connecting to a database with PHP
Install these packages:
#apt-get install apache2
#apt-get install mysql-server
#apt-get install php5
#apt-get install php5-mysql
Create a test user, password and database
At the SQL server, log into mysql:
#mysql -u root -p
Issue the following commands to create a user “test” and a password “password”:
CREATE USER 'test'@'localhost' IDENTIFIED BY 'password';
CREATE USER 'test'@'%' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO 'test'@'localhost';
GRANT ALL ON *.* TO 'test'@'%';
CREATE DATABASE instruments;
Exit mysql:
\q
Log back in as the user you just created, attaching to the new database:
mysql -u test -p instruments
Execute a
\s
to see the status. Verify the user and database.
Test PHP Functionality:
Create a file named something.php and insert the following text:
<?php echo 'hello world'.time();
/* mysqli_connect(); print_r(mysqli_query('select now()')); */
?>
Place this file in the /var/www directory
Open a browser and point to that file:
http://<your server>/something.php
You should see hello world and the date.
To test your connection to the database via PHP:
Create a file with the following text and name it something.php
Edit the line $db = mysql_connect("206.207.94.34","test","password"); to reflect your server and user.
<?php
$db = mysql_connect("206.207.94.34", "test", "password");
if (!$db) {
    die("Database connection failed miserably: " . mysql_error());
} else {
    echo "Database Success!!!";
}
$db_select = mysql_select_db("instruments", $db);
if (!$db_select) {
    die("Database selection also failed miserably: " . mysql_error());
}
?>
<html>
<head>
<title>Step 3</title>
</head>
<body>
<?php
$result = mysql_query("SELECT * FROM mytable", $db);
if (!$result) {
die("Database query failed: " . mysql_error());
}
}
?>
</body>
</html>
Place this file in the /var/www directory
Open a browser and point to that file:
http://<your server>/something.php
Success!!!
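If the page reports a connection failure instead, it can help to rule out networking and credentials before blaming PHP. A quick check from the web server's shell, reusing the example server and credentials from above:
# should print the current date/time if the remote MySQL server accepts the login
mysql -h 206.207.94.34 -u test -ppassword instruments -e "SELECT NOW();"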
HANDY MYSQL COMMANDS:
Note that all text commands must be first on line and end with ‘;’
? (\?) Synonym for `help'.
clear (\c) Clear the current input statement.
connect (\r) Reconnect to the server. Optional arguments are db and host.
delimiter (\d) Set statement delimiter.
edit (\e) Edit command with $EDITOR.
ego (\G) Send command to mysql server, display result vertically.
exit (\q) Exit mysql. Same as quit.
go (\g) Send command to mysql server.
help (\h) Display this help.
nopager (\n) Disable pager, print to stdout.
notee (\t) Don't write into outfile.
pager (\P) Set PAGER [to_pager]. Print the query results via PAGER.
print (\p) Print current command.
prompt (\R) Change your mysql prompt.
quit (\q) Quit mysql.
rehash (\#) Rebuild completion hash.
source (\.) Execute an SQL script file. Takes a file name as an argument.
status (\s) Get status information from the server.
system (\!) Execute a system shell command.
tee (\T) Set outfile [to_outfile]. Append everything into given outfile.
use (\u) Use another database. Takes database name as argument.
charset (\C) Switch to another charset. Might be needed for processing binlog with multi-byte charsets.
warnings (\W) Show warnings after every statement.
nowarning (\w) Don't show warnings after every statement.
For server side help, type ‘help contents’
Matt Long
01/27/2015
Getting Started with Mysql and Heidi SQL
At the target server prompt, become root and type:
apt-get install mysql-server
Provide a password for the root mysql user.
Verify that mysql is running:
netstat -tap | grep mysql
you should see something like this:
tcp 0 0 localhost:mysql *:* LISTEN 21921/mysqld
Verify that you can log in:
mysql -u root -p
You should be prompted for the root password and your prompt should change to:
mysql>
Type “help” or “?” to review the commands.
To exit, type:
\q
To configure MySQL for remote connections, edit /etc/mysql/my.cnf:
Verify the port number. The default is 3306
Set the bind-address to 0.0.0.0
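For example, on a typical Debian/Ubuntu install both settings live under the [mysqld] section of /etc/mysql/my.cnf. A quick, hedged way to make and verify the change from the shell (assuming the stock file layout):
# edit bind-address in place, then confirm the port (default 3306) and bind-address
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
grep -E '^(port|bind-address)' /etc/mysql/my.cnf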
Log in to mysql as above and execute the following commands.
CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'mypass';
CREATE USER 'myuser'@'%' IDENTIFIED BY 'mypass';
GRANT ALL ON *.* TO 'myuser'@'localhost';
GRANT ALL ON *.* TO 'myuser'@'%';
Note: the 'localhost' and '%' are the correct syntax. Only change myuser and mypass.
Exit mysql:
\q
Restart mysql:
/etc/init.d/mysql restart
You should be able to install Heidi SQL and log in now.
Matt
01/23/2015
Configuring PostgreSQL for access with PGAdmin
Download and install PGAdmin on your workstation.
At the postgresql server that you want to connect to, the configuration files for PostgreSQL are stored in:
/etc/postgresql/<version>/main
Edit the postgresql.conf file, uncomment the listen_addresses line and add the server's IP address:
listen_addresses = 'ip_address'
ssl = true
ssl_ciphers = ALL
password_encryption = on
You can also change the default port number here with the line:
port = 5432
Make note of the port number that you use.
Edit the pg_hba.conf file and include a line:
host all all 192.168.1.0/24 md5
Substitute the IP address range for whatever your local network uses.
If your login in PGAdmin fails, the error message may report a different IP address than the one you expect for your workstation. Use the reported IP address in the pg_hba.conf file.
After making changes to the postgresql.conf file, restart the database server:
/etc/init.d/postgresql restart
PGAdmin should now be able to open a connection to your database server. Log in with your postgresql username and password.
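If PGAdmin still will not connect, a quick sanity check from the workstation (or any remote box with the postgresql client installed) can confirm the server is reachable on the port you configured. The host, port, user and database below are placeholders, not values from this setup:
psql -h 192.168.1.50 -p 5432 -U myuser -d mydb -c "SELECT version();"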
Matt 01/23/2015
AWS Auto Scaling Group – CodeDeploy Challenges
First, here is my setup:
- A single development / test server in the AWS cloud, backed by a separate Git Repository.
- When code is completed in the development environment, it is committed to the development branch (using whichever branching scheme best fits the project)
- At the same time the code is merged to the test branch, and the code is available for client testing on the 'test_stage' site if they would like
- Then, on an as-needed basis, the code in the test branch (on the test_stage server) is deployed to AWS using their CodeDeploy API
- git archive test -> deploy.zip
- upload the file to an S3 bucket (s3cmd)
- register the zip file as a revision using the AWS Register Revision API call
- This creates a file that can be deployed to any deployment group
- I set up two deployment groups in my AWS account, test and live.
- When the client is ready, I run a script which deploys the zipped-up revision to the test server, where they are able to look at it and approve.
- Then I use the same method, but deploy it to the www (live) deployment group instead.
(The complexities of setting this up are deeper than I am going into in this article, but for future reference, all of this programming knowledge is stored in our deploy.php file; a rough sketch of the flow follows below.)
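For reference, here is a hedged command-line sketch of that flow using s3cmd and the AWS CLI. The application name, bucket and deployment group names below are placeholders, not the actual values from our deploy.php:
git archive --format=zip -o deploy.zip test          # bundle the test branch
s3cmd put deploy.zip s3://my-deploy-bucket/deploy.zip
aws deploy register-application-revision --application-name MyApp \
    --s3-location bucket=my-deploy-bucket,key=deploy.zip,bundleType=zip
# deploy the registered revision to the test group; repeat with the www group once approved
aws deploy create-deployment --application-name MyApp --deployment-group-name test \
    --s3-location bucket=my-deploy-bucket,key=deploy.zip,bundleType=zip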
A couple of tricks "they" don't tell you:
- Errors can be difficult to debug – if you update your CodeDeploy agent to do more verbose logging, it can help you determine what some of the errors were.
- update /etc/codedeploy-agent/conf/codedeployagent.yml, set verbose to true.
- restart the service: /etc/init.d/codedeploy-agent restart (it can take several minutes to restart, this is normal)
- tail the log files to watch a deployment in real time, or investigate it after the fact (tail the files under /var/log/aws/codedeploy-agent/)
- Deploying a revision to servers while they may be going through termination instability will likely cause your deployment to fail when one of your servers terminates.
- To prevent this, update the Auto Scaling group to have the same minimum and maximum number of servers for the duration of the deployment, and do not put it under load during the 10 – 15 minutes (up to 2 hours) it may take; scaling events during that window will cause errors.
- Depending on the load on your servers, your deployment could take a lot of CPU, which could generate an autoscaling alert, spin up new instances, or send you an email. There is no single correct way to deal with this, but it is a good idea to know about it before you deploy.
- Finally, the item I wrote this post because of: it appears that when you attempt to deploy a revision to an Auto Scaling group, it can cause some failures.
- The obvious one is that the deployment will fail if it is attempted while a server is shutting down.
- However, it seems that if you have decided to upgrade your AMI and your Launch Configuration, a deployment will fail. For me, it actually caused a key failure on login as well (this could have been because of multiple server terminations, after which another server took over the IPs within a few minutes). Anyway, much caution around these things.
UPDATE:
Well, the problem was actually that my 'afterinstall.sh' script was cleaning up the /opt/codedeploy-agent/deployment-root/ directory (so we didn't run out of space after a couple dozen deployments), but it was also removing the appspec.yml file.
So I updated the command that runs in the afterinstall script to be:
/usr/bin/find /opt/codedeploy-agent/deployment-root/ -mindepth 2 -mtime +1 -not -path '*deployment-instruction*' -delete
Debugging CodeDeployment on AWS
This article is being written well after I had already installed the CodeDeploy agent on an Ubuntu server, created an AMI out of it, and set it up as an auto-launch server from an Auto Scaling group.
I am documenting the process I went through to dig into the error a bit more; this helps to identify and remember where the log files are and how to get additional information, even if the issue is never the same again.
Issues with running the codedeploy-agent showed up as an error in the log:
more /var/log/aws/codedeploy-agent/codedeploy-agent.log
put_host_command_complete(command_status:"Failed",diagnostics:{format:"JSON",payload:"{"error_code":5,"script_name":"","message":"not opened for reading","log":""}"
I decided this was not enough information to troubleshoot the error, so I had to dig in and find a way to make it more verbose. I found this file; just update verbose from false to true:
vi /etc/codedeploy-agent/conf/codedeployagent.yml
Then restart the codedeploy-agent
/etc/init.d/codedeploy-agent restart
This can take quite a while, since it runs quite a bit of background installing and checking for duplicate processes, but once it is complete you can check that the process is running again:
ps ax|grep codedeploy
Once this is running in verbose mode, monitor the log:
tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
Re-run the deployment and view the results in the log. The most useful thing for me was to grep for the folder that more specific error information was written to:
grep -i "Creating deployment" /var/log/aws/codedeploy-agent/codedeploy-agent.log
This showed me the folder that all of the code was going to be extracted to. Since there was an error, the system actually dumped the contents of the error into a file called bundle.tar in the folder that it would have extracted to:
cat /opt/codedeploy-agent/deployment-root/7ddce865-0611-45f0-bf74-459fcf806f23/d-YK4NWBJD7/bundle.tar
This returned an error from S3 showing that CodeDeploy was having trouble downloading from S3, so I had to add access to the instance's policy to download from S3 buckets as well.
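For reference, here is a hedged example of the kind of permission that was missing: allowing the instance's role to read the revision bucket. The role name, policy name and bucket name below are placeholders:
aws iam put-role-policy --role-name MyRoleName --policy-name CodeDeployS3Read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:Get*", "s3:List*"],
        "Resource": ["arn:aws:s3:::my-deploy-bucket", "arn:aws:s3:::my-deploy-bucket/*"]
      }
    ]
  }'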
InstanceAgent::CodeDeployPlugin::CommandPoller: Missing credentials – Debug / My Fix
I created a Policy and Launch Configuration according to this documentation
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-service-role.html
However I still received this error
InstanceAgent::CodeDeployPlugin::CommandPoller: Missing credentials - please check if this instance was started with an IAM instance profile
The problem lay somewhere in the configuration of how I set up and launched the server, so I set about exploring the server to evaluate and confirm the 'missing credentials'.
Using these checks, I would be able to confirm the reason behind these errors (and therefore the problem with actually deploying code).
First run this command to check to see what roles were ‘requested’
echo `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/ -O - -q `
This command will return the list of roles that were given to your server, for example:
MyRoleName
Then add the returned role name to the end of the URL to see the results of attempting to assume the role. In my case this error was displayed:
{ "Code" : "AssumeRoleUnauthorizedAccess", "Message" : "EC2 cannot assume the role MyRoleName. Please see documentation at http://docs.amazonwebservices.com/IAM/latest/UserGuide/RolesTroubleshooting.html.", "LastUpdated" : "2015-02-17T04:38:25Z" }
Basically, EC2 was unable to assume the role MyRoleName (it did not have permission to assume the role). This could be due to the permissions of the development account that was used to start the Scaling Group which launched the EC2 instance. To test this:
- log in with an administrator account
- recreate the scaling group, identically
- log in to the server
- run the echo `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/ -O - -q ` command again
This didn't change anything, so I thought I would look into the specifics of why we have a role that should be assumable, yet a message explaining that EC2 cannot assume the role.
So I looked back at the article http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-service-role.html and found somewhat of an anomaly: the article suggests that we build a trust relationship which gives the 'codedeploy.us-west-2.amazonaws.com' service the ability to use this policy. However, that does not jibe with the messages I see in the logs stating that EC2 is not able to assume the role. So I opened the trust relationship under the role and added the 'ec2.amazonaws.com' line (bolded in the original post):
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com", "codedeploy.us-east-1.amazonaws.com", "codedeploy.us-west-2.amazonaws.com"
] }, "Action": "sts:AssumeRole" } ] }
Lo and behold, it worked. Now when I run the following I get a success message:
echo `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRoleName -O - -q `
So, it turns out the problem is that the article is either incorrect or was written to give the CodeDeploy service the ability to work on EC2, but not to give the EC2 servers access to the CodeDeploy service. (The codedeploy.us-east-1 services are also required in order to give the deployment group the IAM role.)
While it was difficult to find the solution, the troubleshooting steps above are useful for identifying related or other issues. I hope you find some use for these tools.
Michael Blood
Compare the packages (deb / apache) on two debian/ubuntu servers
Debian / Ubuntu
I worked up this command and I don’t want to lose it
#diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'
This command shows a list of all of the packages installed on 111.222.33.44 that are not installed on the current machine
To make this work for you, just update the ssh 111.222.33.44 command to point to the server you want to compare it with.
I used this command to actually create my apt-get install command
#apt-get install `diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'`
Just be careful that you have the same Linux kernels etc, or you may be installing more than you expect
Apache
The same thing can be done to see if we have the same Apache modules enabled on both machines:
diff <(a2query -m|awk '{print $1}'|sort) <(ssh 111.222.33.44 a2query -m|awk '{print $1}'|sort)
This will show you which modules are / are not enabled on the different machines
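A hedged follow-up, if the goal is to bring this machine in line with the remote one: feed the missing module names into a2enmod. Review the list first, since some modules need extra packages or configuration.
a2enmod `diff <(a2query -m|awk '{print $1}'|sort) <(ssh 111.222.33.44 a2query -m|awk '{print $1}'|sort)|grep '>'|sed -e 's/> //'`
/etc/init.d/apache2 restart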
Installing s3tools on SUSE using yast
We manage many servers with multiple flavors of Linux. All of them use either apt or yum for package management.
The concept of yast is the same as apt and yum, but was new to me, so I thought I would document it.
Run yast, which pulls up an ncurses Control Center; use the arrows to go to Software -> Software Repositories:
#yast
Use the arrows or press Alt+A to add a new repository
I selected Specify URL (the default) and pressed Alt+X to go to the next screen, where I typed into the URL box:
http://s3tools.org/repo/SLE_11/
and then pressed Alt+n to continue.
Now I have a new repository and I press Alt+q to quit.
At the command line I typed:
#yast2 -i s3cmd
And s3cmd is installed, all in about 15 minutes!
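With s3cmd installed, a quick hedged sanity check (the bucket name and file below are placeholders):
s3cmd --configure                                  # enter your AWS access key and secret
s3cmd ls                                           # list your buckets
s3cmd put somefile.tar.gz s3://my-bucket/somefile.tar.gz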
Bulk Domain NS, MX and A record lookup tool
Summary: We have two tools to help you lookup information on domains quickly
- quick-domain-research.php – See the NS, MX, A records and IPs for multiple domains in one table
- nameserver-compare.php – Compare NS, MX, A records for multiple domains, against multiple Name Servers
Occasionally, we come across some sort of project in which we have to work through a list of multiple domain names and make some sort of changes.
In some cases we simply have to update contact records, in other cases we have to determine ownership, hosting and mail setups so we can assist with an ownership transfer.
There are a plethora of domain tools out there which help one domain at a time, but we were hard pressed to find a tool that could do a bulk lookup of multiple domains with table-based output.
So, we built the tool
https://www.matraex.com/quick-domain-research.php
This tool displays the following for each domain (a command-line approximation is sketched after this list):
- A records for the root domain (@) and the (www) domain.
- MX records for the root domain
- NS records for the root domain
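For a rough command-line approximation of the same lookups, assuming a domains.txt file with one domain per line and dig installed (dnsutils/bind-utils):
while read d; do
  echo "== $d =="
  dig +short NS "$d"
  dig +short MX "$d"
  dig +short A "$d"
  dig +short A "www.$d"
done < domains.txt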
This tool was thrown together quickly to help us identify whether an OLD but active nameserver, which had dozens of domain names on it, was actually being used for the domains.
We were able to delete more than 20 domains cluttering up the DNS entries.
Additionally we were able to clean up associated webservers that had not been cleaned of hosting accounts after a client left the account.
Some future ideas which will make their way in next time:
- Display whois information for the domain
- Optionally group the domains based on which name servers, whois records or www C class they are hosted at
Update 11/28/2015 by Michael Blood
Since this original post, we have added several new features including the ability to upload a file with a large batch upload, and download a CSV file with the results. You can see all of the details in this Enhanced Bulk Domain NS, MX and A record lookup tool post.