Category: Linux
SSL Cipher Suites – Apache config for IE 11
In past posts I showed how I followed suggestions from Qualys on configuring Apache to use only specific ciphers, in order to pass all of the required security scans.
However, it turns out that blindly using their list of ciphers led to another problem: IE 11 could no longer display the page. I describe the fix below.
In addition, the process I go through below can help you troubleshoot, and enable or disable ciphers for, any combination of situation and browser.
On this page:
https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy
They suggest setting this SSLCipherSuite:
EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4
However, I found IE 11 was showing “This web page can not be displayed” on Windows 7 and Windows Server 2008 (probably others as well).
I confirmed that the problem was the cipher suite by commenting out the SSLCipherSuite line in Apache and restarting; the page then loaded.
The next step, with the SSLCipherSuite line still commented out, was to run the SSL Labs test:
https://www.ssllabs.com/ssltest/
The results include details about the cipher suites negotiated by different browsers. I would use this tool to make sure you have the correct cipher suite for any and all browsers, and to exclude older, insecure browsers.
If you look down the report to the “Handshake Simulation” portion, you will find a listing of browsers along with the cipher each one used. IE 11 / Win 7 was working here even before I noticed the ‘can not be displayed’ error, so I went on a hunch and decided to try enabling the cipher shown for the IE 8-10 / Win 7 entry:
TLS_RSA_WITH_AES_256_CBC_SHA
I googled “openssl TLS_RSA_WITH_AES_256_CBC_SHA”, which brought me to the OpenSSL page listing all of the ciphers. There I found the OpenSSL name “AES256-SHA”, which I needed to include in the Apache SSLCipherSuite directive:
https://www.openssl.org/docs/apps/ciphers.html
Next, to confirm that this cipher was even available on my server, I ran this command:
openssl ciphers AES256-SHA
which returned a result showing that the cipher was indeed available on the server.
So, I added it towards the end, and the resulting SSLCipherSuite directive I have is:
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA AES256-SHA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"
And now I can load the webpage in the IE 11 browser.
Note that when I ran the ssllabs.com test again, it downgraded the site to an A-, probably because this cipher does not offer Forward Secrecy (noted with a small orange ‘No FS’ on the report).
I decided that this is an acceptable grade in order to allow IE 11 to access the site, but hopefully Microsoft figures it out.
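As a side note (my addition, not from the original post), you can also ask openssl to handshake with your server using exactly one cipher enabled; this is a quick way to confirm whether a change to SSLCipherSuite actually took effect. A minimal sketch, where `example.com` is a placeholder for your own host:

```shell
# Attempt a handshake offering only the named cipher. Success means the
# server accepts it; failure means the directive excludes it.
check_cipher() {
  local host="$1" cipher="$2"
  if echo | openssl s_client -connect "${host}:443" -cipher "$cipher" >/dev/null 2>&1; then
    echo "$cipher: accepted"
  else
    echo "$cipher: rejected"
  fi
}

check_cipher example.com AES256-SHA
```

Run it once before and once after your Apache restart to see the cipher flip from rejected to accepted.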
SSL Vulnerability and Problem Test – Online and Command Line
There are many vulnerabilities out there, and there seems to be no single test for all of them.
When working to correct SSL issues, some of the more comprehensive tests check EVERYTHING. While this is good, it can also make it difficult to test the small incremental changes we make as system administrators.
This blog post collects, in one place, links and methods we can use to quickly test for individual failures.
The big test, which takes only a minute or so but is somewhat bloated for individual checks, is ssllabs.com. You will find most failures here and even get a grade:
http://ssllabs.com
But you won’t find them all, and it is difficult to quickly test small changes. So here are some instant tests.
If you have an SSL chain issue:
openssl s_client -connect example.com:443
To test for CVE-2014-0224, otherwise known as the CCS Injection vulnerability, enter your domain here:
http://ccsbug.exposed/
To test for CVE-2014-0160, better known as Heartbleed:
http://possible.lv/tools/hb/
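One more instant test worth keeping on this list (my addition): checking whether a server still accepts SSLv3, the protocol behind POODLE (CVE-2014-3566). This sketch assumes your local openssl build still supports the `-ssl3` option, and the host name is a placeholder:

```shell
# Attempt an SSLv3-only handshake. If the output still reports a real
# cipher, the server accepts SSLv3; "(NONE)" or no output means it refused.
ssl3_status() {
  local out
  out=$(echo | openssl s_client -connect "$1:443" -ssl3 2>/dev/null)
  case "$out" in
    *"Cipher is (NONE)"*|"") echo "SSLv3 disabled (good)" ;;
    *)                       echo "SSLv3 ENABLED (bad)" ;;
  esac
}

ssl3_status example.com
```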
Verify ssl certificate chain using openssl
SSL certificates ‘usually’ work and show ‘green’ in browsers, even if the full certificate chain is not correctly configured in Apache.
You can use tools such as SSL Labs, or run a PCI ASV scan against your site, to find out whether you are compliant, but a quicker way is to use openssl from the command line.
Using this command you can quickly verify your SSL certificate and certificate chain from your Linux command line:
openssl s_client -showcerts -connect mydomain.com:443
If you receive the line ‘Verify return code: 0’ at the end of the long output, your chain is working; you may receive error 27 if it is not configured correctly.
In order to configure it correctly you will likely need a line in your Apache conf file:
SSLCACertificateFile <yourCAfilename>
in addition to the lines which point to your key and certificate files:
SSLCertificateFile <yourcertfilename>
SSLCertificateKeyFile <yourkeyfilename>
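The same chain logic can also be exercised completely offline with `openssl verify`, which is handy before touching the Apache config. This is my own addition: the sketch generates a throwaway self-signed “CA” just so the commands are runnable as-is; with real files you would verify your server certificate against your CA bundle instead.

```shell
# Create a throwaway self-signed CA certificate (placeholder files in /tmp)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=TestCA" -days 1 \
  -keyout /tmp/test-ca.key -out /tmp/test-ca.pem 2>/dev/null

# Verify a certificate against a CA file. With your real files this is:
#   openssl verify -CAfile <yourCAfilename> <yourcertfilename>
openssl verify -CAfile /tmp/test-ca.pem /tmp/test-ca.pem
```

A working chain prints `<certfile>: OK`; a broken one prints a verify error such as error 20 or 27.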
iptables commands which ‘might’ make your firewall PCI compliant
This is a list of the iptables commands that will set up a minimal firewall which ‘might’ be PCI compliant.
This is primarily here to remind me, so I have a reference in the future.
I also open ports for FTP and SSH for a single developer IP, as well as a port for a single monitoring server. The format is simple and can easily be changed for other services.
Be sure to replace ‘my.ip’ with your developer IP, and ‘monitoring.ip’ with your monitoring server’s IP.
This is on a Linux Ubuntu machine (of course)
apt-get install iptables iptables-persistent
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s my.ip/32 -j ACCEPT
iptables -A INPUT -p tcp --dport 21 -s my.ip/32 -j ACCEPT
iptables -A INPUT -p tcp --dport 5666 -s monitoring.ip/32 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p udp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p udp --dport 443 -j ACCEPT
iptables -A INPUT -j REJECT --reject-with icmp-host-unreachable
iptables -A INPUT -p icmp --icmp-type timestamp-request -j DROP
iptables -A OUTPUT -p icmp --icmp-type timestamp-reply -j DROP
iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN FIN,SYN -j DROP
iptables -t raw -A PREROUTING -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,PSH,URG -j DROP
iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN -j DROP
iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
iptables -t raw -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
iptables-save > /etc/iptables/rules.v4
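To make the ‘my.ip’ / ‘monitoring.ip’ substitution harder to get wrong, the per-IP rules can be parameterized. This is my own sketch, shown through an `echo` shim so it is safe to run as a non-root dry run; the IPs are documentation placeholders (RFC 5737). Drop the `echo` inside `rule()` to apply the rules for real.

```shell
DEV_IP="203.0.113.10"   # placeholder: your developer IP
MON_IP="203.0.113.20"   # placeholder: your monitoring server IP

# dry-run shim: prints each rule instead of applying it
rule() { echo iptables "$@"; }

rule -A INPUT -p tcp --dport 22 -s "$DEV_IP/32" -j ACCEPT
rule -A INPUT -p tcp --dport 21 -s "$DEV_IP/32" -j ACCEPT
rule -A INPUT -p tcp --dport 5666 -s "$MON_IP/32" -j ACCEPT
```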
Installing tsung on an amazon t2.micro server
Install Ubuntu 14.04, then:
#apt-get update
#apt-get install erlang erlang-dev erlang-eunit
#wget http://tsung.erlang-projects.org/dist/tsung-1.5.1.tar.gz
#tar -xvzf tsung-1.5.1.tar.gz
#cd tsung-1.5.1
#make
#make install
#tsung-recorder start
That is it!! You are now collecting data and can run a recording session.
———————–read below for instructions on a failed attempt
Install Ubuntu 14.04, launch and run
#apt-get update
#apt-get install tsung
This still comes up with a crash report because tsung attempts to use the wrong version of erlang; it seems the tsung build expects a different version, perhaps because the versions Debian considers the most up to date are not compatible.
—– read below if you want the instructions that I started but that did not work, because Amazon’s yum-based AMI sucks compared to Ubuntu
Once you launch and connect to the Amazon server (I chose a small Amazon server which already has the Amazon CLI tools installed):
#sudo yum update
#sudo yum --nogpgcheck install http://tsung.erlang-projects.org/dist/redhat/tsung-1.5.1-1.fc20.x86_64.rpm
#sudo ln -s /usr/bin/erl /bin/erl # (not sure why the package installs erlang in one location and tsung looks in another ...)
Now you are ready to run the tsung command to record your session
#tsung-recorder start -d 7 -P http
But you get the error below…
Starting Tsung recorder on port 8090
[root@ip-172-16-1-236 ~]# {"init terminating in do_boot",{undef,[{tsung_recorder,start,[]},{init,start_it,1},{init,start_em,1}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Compare the packages (deb / apache) on two debian/ubuntu servers
Debian / Ubuntu
I worked up this command and I don’t want to lose it
#diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'
This command shows a list of all of the packages installed on 111.222.33.44 that are not installed on the current machine
To make this work for you, just update the ssh 111.222.33.44 command to point to the server you want to compare it with.
I used this command to actually create my apt-get install command
#apt-get install `diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'`
Just be careful that both machines run the same Linux kernel, etc., or you may install more than you expect.
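The same comparison can also be phrased with `comm(1)`, which I find slightly easier to reason about than diff + grep + sed. This is an alternative sketch, not the post’s original command; the helper just takes two package lists, so the ssh part stays exactly as above:

```shell
# Print the lines that appear only in the second list (comm -13 suppresses
# lines unique to the first list and lines common to both).
only_in_second() {
  comm -13 <(sort -u <<<"$1") <(sort -u <<<"$2")
}

# Real usage (111.222.33.44 is the example host from the post):
#   only_in_second "$(dpkg -l | awk '/ii /{print $2}')" \
#                  "$(ssh 111.222.33.44 dpkg -l | awk '/ii /{print $2}')"

# Tiny self-contained demo with fabricated package lists:
only_in_second $'bash\ncoreutils' $'bash\ncoreutils\nnginx'
```

The demo prints `nginx`, the one package present only in the second list.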
Apache
The same approach can be used to see whether the same Apache modules are enabled on both machines:
diff <(a2query -m|awk '{print $1}'|sort) <(ssh 111.222.33.44 a2query -m|awk '{print $1}'|sort)
This will show you which modules are and are not enabled on each machine.
Installing s3tools on SUSE using yast
We manage many servers with multiple flavors of Linux. All of them use either apt or yum for package management.
The concept of yast is the same as apt and yum, but it was new to me, so I thought I would document it.
Run yast, which pulls up an ncurses Control Center, and use the arrow keys to go to Software -> Software Repositories:
#yast
Use the arrows or press Alt+A to add a new repository
I selected Specify URL (the default) and pressed Alt+x to go to the next screen, where I typed this into the URL box:
http://s3tools.org/repo/SLE_11/
and then pressed Alt+n to continue.
Now I have a new repository and I press Alt+q to quit.
At the command line I typed
#yast2 -i s3cmd
And s3cmd is installed: 15 minutes from start to finish!
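For scripted installs, the same steps can be done non-interactively with zypper, SUSE’s command-line package tool, instead of the yast UI. This is my own addition, shown through an echo shim so the sketch is harmless to run anywhere; delete the shim line to execute for real (the repo URL is the one from the post):

```shell
zypper() { echo zypper "$@"; }   # dry-run shim; delete this line to run for real

zypper addrepo http://s3tools.org/repo/SLE_11/ s3tools
zypper --non-interactive install s3cmd
```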
Disk write speed testing different XenServer configurations – single disk vs mdadm vs hardware raid
In our virtual environment, one of the VM host servers has a hardware RAID controller, so naturally we used the hardware RAID.
The server is a Dell 6100, which uses a low-featured LSI SAS RAID controller.
One of the ‘low’ features is that it only allows two RAID volumes at a time. It also does not do RAID 10.
So I decided to create a RAID 1 with two SSD drives for the host, which would also hold the root operating systems for each of the guest VMs. It would be fast and redundant. That leaves up to four 1TB disks for the larger data sets. We have multiple identically configured VM hosts in our pool.
For the data drives, with only one more RAID volume available and no RAID 10, I was limited to a RAID 5, a mirror with two spares, or a JBOD. To get the most space out of the four 1TB drives, I created the RAID 5. After configuring two identical VM hosts like this, I put a DRBD Primary/Primary connection between the two of them with an OCFS2 filesystem on top. I found I got write speeds as low as 3MB/s. I hadn’t originally thought much about what speeds I would get; I just expected them to be somewhere around disk write speed, so I suppose I was expecting something acceptable, between 30 and 80 MB/s. When I didn’t get that, I realized I would have to do some simple benchmarking of my four 1TB drives to see which configuration would give me the best balance of speed and size.
A couple of environment items
- I will mount the final drive on /data
- I mount temporary drives in /mnt when testing
- We use XenServer for our virtual environment, I will refer to the host as the VM Host or dom0 and to a guest VM as VM Guest or domU.
- The final speed that we are looking to get is on domU, since that is where our application will be, however I will be doing tests in both dom0 and domU environments.
- It is possible that the domU may be the only VM Guest, so we will also test raw disk access from domU for the data (and skip the abstraction level provided by the dom0)
So, as I test the different environments I need to be able to create and destroy the local storage on the dom0 VM host. Here are some commands that help me do it.
I already went through XenCenter and removed all connections and virtual disks on the storage I want to remove: I had to click the device “Local Storage 2” under the host, click the Storage tab, and make sure each was deleted. {VM Host SR Delete Process}
xe sr-list host=server1 #find and keep the uuid of the sr in my case "c2457be3-be34-f2c1-deac-7d63dcc8a55a"
xe pbd-list sr-uuid=c2457be3-be34-f2c1-deac-7d63dcc8a55a # find and keep the uuid of the pbd connecting the sr to dom0 "b8af1711-12d6-5c92-5ab2-c201d25612a9"
xe pbd-unplug uuid=b8af1711-12d6-5c92-5ab2-c201d25612a9 #unplug the device from the sr
xe pbd-destroy uuid=b8af1711-12d6-5c92-5ab2-c201d25612a9 #destroy the devices
xe sr-forget uuid=c2457be3-be34-f2c1-deac-7d63dcc8a55a #destroy the sr
Now that the sr is destroyed, I can work with the raw disks on the dom0 and benchmark the speeds of different soft configurations from there.
Once I have made a change to the structure of the disks, I can recreate the sr, with a new name, on top of whatever solution I come up with:
xe sr-create content-type=user device-config:device=/dev/XXX host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage XXX on `cat /etc/hostname`" shared=false type=lvm
Replace the XXX with what works for you.
Most of the tests consisted of me running dd commands and recording the slowest time, and then what seemed to be about the average time, in MB/s. It seemed like the first write after a disk had been idle was a bit slower, and each subsequent write was faster. If that is the case, there are two scenarios: if the disk is often idle, it will see the slower number, and if the disk is busy, it will see the higher average number, so I tracked both. The idle-disk observation was not scientific, and many of my tests did not wait long enough for the disk to go idle in between runs.
The commands I ran for testing were dd commands
dd if=/dev/zero of=/data/speedtest.`date +%s` bs=1k count=1000 conv=fdatasync # for 1 MB
dd if=/dev/zero of=/data/speedtest.`date +%s` bs=1k count=10000 conv=fdatasync # for 10 MB
dd if=/dev/zero of=/data/speedtest.`date +%s` bs=1k count=100000 conv=fdatasync # for 100 MB
dd if=/dev/zero of=/data/speedtest.`date +%s` bs=1k count=1000000 conv=fdatasync # for 1000 MB
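The four invocations can be wrapped in a small loop; this is my own convenience wrapper, with TARGET as a placeholder for whichever mount point you are benchmarking (the post uses /data), and it cleans up its test files afterwards:

```shell
TARGET=/tmp   # placeholder: point at the filesystem under test, e.g. /data

for count in 1000 10000; do      # add 100000 1000000 for the 100MB/1000MB runs
  f="$TARGET/speedtest.$$.$count"
  # dd reports bytes written, elapsed time, and throughput on its last line
  dd if=/dev/zero of="$f" bs=1k count=$count conv=fdatasync 2>&1 | tail -n 1
  rm -f "$f"
done
```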
I won’t get into the details of every single command I ran while creating the different disk configurations and environments, but I will document a couple of them.
Soft RAID 10 on dom0
dom0> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2 --assume-clean
dom0> mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd2 --assume-clean
dom0> mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1 --assume-clean
dom0> mkfs.ext3 /dev/md10
dom0> xe sr-create content-type=user device-config:device=/dev/md10 host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage md10 on `cat /etc/hostname`" shared=false type=lvm
Dual Dom0 Mirror – Striped on DomU for an “Extended RAID 10”
dom0> {VM Host SR Delete Process} # to clean out 'Local storage md10'
dom0> mdadm --manage /dev/md2 --stop
dom0> mkfs.ext3 /dev/md0 && mkfs.ext3 /dev/md1
dom0> xe sr-create content-type=user device-config:device=/dev/md0 host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage md0 on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/md1 host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage md1 on `cat /etc/hostname`" shared=false type=lvm
domU> # at this point, use XenCenter to add and attach disks from each of the local md0 and md1 SRs to the domU (they were attached on my systems as xvdb and xvdc)
domU> mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc
domU> mkfs.ext3 /dev/md10 && mount /dev/md10 /data
Four-disk SRs from dom0, soft RAID 10 on domU
domU> umount /data
domU> mdadm --manage /dev/md10 --stop
domU> {delete the md2 and md1 disks from the Storage tab under your VM Host in XenCenter}
dom0> {VM Host SR Delete Process} # to clean out 'Local storage md10'
dom0> mdadm --manage /dev/md2 --stop
dom0> mdadm --manage /dev/md1 --stop
dom0> mdadm --manage /dev/md0 --stop
dom0> fdisk /dev/sda # delete partition and write (d w)
dom0> fdisk /dev/sdb # delete partition and write (d w)
dom0> fdisk /dev/sdc # delete partition and write (d w)
dom0> fdisk /dev/sdd # delete partition and write (d w)
dom0> xe sr-create content-type=user device-config:device=/dev/sda host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sda on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/sdb host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sdb on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/sdc host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sdc on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/sdd host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sdd on `cat /etc/hostname`" shared=false type=lvm
domU> mdadm --create /dev/md10 -l10 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvde /dev/xvdf
domU> mdadm --detail --scan >> /etc/mdadm/mdadm.conf
domU> echo 100000 > /proc/sys/dev/raid/speed_limit_min # speed up the resync; this reduced it from 26 hours to about 3
domU> mdadm --grow /dev/md0 --size=max
Working with GB-large MySQL dump files – splitting insert statements
Recently I had to restore a huge database from a huge MySQL dump file.
Since the dump file had all of the CREATE statements mixed with the INSERT statements, recreating the database took a very long time, with the possibility that it might error out and roll back all of the transactions.
So I came up with the following script, which processes the single MySQL dump file and splits it so we can run the different parts separately.
This creates files, which can be run individually, named:
- mysql.tblname.beforeinsert
- mysql.tblname.insert
- mysql.tblname.afterinsert
cat mysql.dump.sql | awk 'BEGIN{ TABLE="table_not_set" }
{
  if ($1=="CREATE" && $2=="TABLE") {
    TABLE=$3
    gsub("`","",TABLE)
    inserted=0
  }
  if ($1!="INSERT") {
    if (!inserted) { print $0 > ("mysql." TABLE ".beforeinsert") }
    else           { print $0 > ("mysql." TABLE ".afterinsert") }
  } else {
    print $0 > ("mysql." TABLE ".insert")
    inserted=1
  }
}'
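To sanity-check the split logic without touching a real dump, the same awk can be run over a tiny fabricated file. Everything here (the table name `t1`, the sample statements, the temp directory) is made up for the demo:

```shell
# Work in a throwaway directory so no real files are touched
cd "$(mktemp -d)"
printf 'CREATE TABLE `t1` (\n  id int\n);\nINSERT INTO t1 VALUES (1);\nUNLOCK TABLES;\n' > sample.sql

# Same split logic as above: pre-INSERT lines, INSERT lines, post-INSERT lines
awk 'BEGIN{ TABLE="table_not_set" }
{
  if ($1=="CREATE" && $2=="TABLE") { TABLE=$3; gsub("`","",TABLE); inserted=0 }
  if ($1!="INSERT") {
    if (!inserted) { print $0 > ("mysql." TABLE ".beforeinsert") }
    else           { print $0 > ("mysql." TABLE ".afterinsert") }
  } else { print $0 > ("mysql." TABLE ".insert"); inserted=1 }
}' sample.sql

ls mysql.t1.*
```

You should see all three files: the CREATE block in .beforeinsert, the INSERT in .insert, and the trailing UNLOCK TABLES in .afterinsert.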
Setting up DRBD with OCFS2 on an Ubuntu 12.04 server for Primary/Primary
We run in a virtual environment, so we thought we would go with the virtual kernel from the latest Linux kernels.
We learned that we should NOT do that if we want to use the OCFS2 distributed-locking filesystem: ocfs2 did not have the correct modules for the virtual kernel, and we would have had to do a custom build of the modules, so we decided against it. We just went with the latest standard kernel and installed the ocfs2 tools from the package manager.
DRBD, on the other hand, had to be downloaded, compiled, and installed regardless of kernel. Here are the procedures; these must be run on each machine of the pair.
We assume that /dev/xvdb is a similarly sized device on both machines.
apt-get install make gcc flex
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
tar xzvf drbd-8.4.4.tar.gz
cd drbd-8.4.4/
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km
make all
make install
Configure both systems to be aware of each other without DNS, in /etc/hosts:
192.168.100.10 server1
192.168.100.11 server2
Create a configuration file at /etc/drbd.d/disk.res
resource r0 {
protocol C;
syncer { rate 1000M; }
startup {
wfc-timeout 15;
degr-wfc-timeout 60;
become-primary-on both;
}
net {
#requires a clustered filesystem (ocfs2) for two primaries mounted simultaneously
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
cram-hmac-alg sha1;
shared-secret "sharedsanconfigsecret";
}
on server1 {
device /dev/drbd0;
disk /dev/xvdb;
address 192.168.100.10:7788;
meta-disk internal;
}
on server2 {
device /dev/drbd0;
disk /dev/xvdb;
address 192.168.100.11:7788;
meta-disk internal;
}
}
Configure DRBD to start on reboot, bring up the resource, and verify that DRBD is running on both machines; reboot and verify again:
update-rc.d drbd defaults
/etc/init.d/drbd start
drbdadm -- --force create-md r0
drbdadm up r0
cat /proc/drbd
At this point you should see that both devices are connected, Secondary/Secondary and Inconsistent/Inconsistent.
Now we start the sync fresh, on server1 only. Both sides are blank, so DRBD can manage any changes from here on; cat /proc/drbd will show UpToDate/UpToDate.
Then we mark both primary and reboot to verify everything comes back up:
server1> drbdadm -- --clear-bitmap new-current-uuid r0
server1> drbdadm primary r0
server2> drbdadm primary r0
server2> reboot
server1> reboot
I took a snapshot at this point
Now it is time to set up the OCFS2 clustered filesystem on top of the device. First set up /etc/ocfs2/cluster.conf:
cluster:
        node_count = 2
        name = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.100.10
        number = 1
        name = server1
        cluster = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.100.11
        number = 2
        name = server2
        cluster = mycluster
Get the needed packages, configure them, and set up for reboot. When reconfiguring, remember to enter the name of the cluster you want started at boot (mycluster). Run the below on both machines:
apt-get install ocfs2-tools
dpkg-reconfigure ocfs2-tools
mkfs.ocfs2 -L mycluster /dev/drbd0 # only run this on server1
mkdir -p /data
echo "/dev/drbd0 /data ocfs2 noauto,noatime,nodiratime,_netdev 0 0" >> /etc/fstab
mount /data
touch /data/testfile.`hostname`
stat /data/testfile.*
rm /data/testfile* # you will only have to run this on one machine
reboot
So, everything should be running on both computers at this point. When things come back up, make sure everything is connected.
You can run these commands from either server
/etc/init.d/o2cb status
cat /proc/drbd