Author: Michael Blood
WordPress Website Check: Instant Tool, 3 Feedback Areas
When working with WordPress websites, there are three main areas we assess within the first few seconds: speed, security and exposure. To help us do this quickly, we built the WordPress Website Check tool, which pulls this information together and presents it in a single interface.
WordPress Website Check – https://www.matraex.com/website-check.php
While each of the three areas (speed, security and exposure) goes much deeper than this small scan, we are able to see some very important metrics very quickly.
Three Checks
Speed
We can see the download speed, the size of your home page, the number of external CSS files and the number of external scripts. This helps us to see how well the site has been optimized. Typically, WordPress websites are made up of a theme with enabled capabilities, as well as a number of plugins. Each of those capabilities and plugins often has its own stylesheets and script files, which can add up to a bloated website.
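For a quick manual version of this check from the command line, something like the following gives rough numbers; www.example.com is a placeholder, and the greps simply count stylesheet links and script tags with a src attribute in the downloaded HTML:
curl -s -o homepage.html -w "time: %{time_total}s  size: %{size_download} bytes\n" https://www.example.com/
grep -o '<link[^>]*stylesheet[^>]*>' homepage.html | wc -l #external CSS files
grep -o '<script[^>]*src=' homepage.html | wc -l #external scripts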
Security
WordPress websites are the subject of frequent hack attempts. Website scanners quickly find WordPress sites that have their admin and login scripts exposed. Our check identifies those scripts and, if they do not block access after a number of failed login attempts (first with the username 'admin' and then with a random username), the site fails this check.
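A quick manual version of this check is to see whether the standard WordPress login and admin scripts answer publicly at all; www.example.com is a placeholder:
for path in wp-login.php wp-admin/; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://www.example.com/$path")
  echo "$path -> HTTP $code" #200 means the script is publicly reachable
done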
Exposure
WordPress websites often publish their version number, as well as details about which plugins they use. Ideally this information should be kept private: when a vulnerability is found in one of these tools, publishing the version is an advertisement to exploit your site.
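To see what a site is giving away, you can look for the generator meta tag and any plugin paths referenced from the home page; a minimal sketch, with www.example.com as a placeholder:
curl -s https://www.example.com/ | grep -io '<meta name="generator"[^>]*>' #advertised WordPress version
curl -s https://www.example.com/ | grep -o 'wp-content/plugins/[^/"]*' | sort -u #plugin directories referenced on the page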
The results can be saved, and a link will be sent to your email so you have permanent access to them.
The tool is our way of checking a site within seconds, and we offer it free. Our hope is that others find this useful and will come to Matraex, Inc for their Website Development, Design, Hosting and Security needs.
Matraex, Inc
208.344.1115
Website Performance Assessment Tool
Webpage performance is important, and there is a plethora of tools out there that let you see your website's performance.
The tools give a large amount of information and website owners can use that information to make assessments and improvements.
As we use these tools to help our clients improve the performance of their websites, we found a couple of needs:
- We needed a tool to quickly compare the results between changes
- We needed somewhere we could go to quickly lookup results next time that we evaluated the performance
So, we built the Website Performance Assessment Tool (matraex.com/website-performance).
This tool allows us to:
- Enter a web page url
- Link to two third-party performance tools (Pingdom, PageSpeed)
- Enter the results and save
The numeric results are then stored in a table and as we make changes we can see how performance improves.
With a couple of enhancements (the ability to track multiple urls and an improved User Interface) we decided to make this tool public and encourage others to use it.
The primary benefits we see are:
- The ability for non technical users to track their site performance
- Website owners can track and evaluate changes made by their website developer
- Website developers and website owners can use the tool to communicate performance expectations and results
Here is one example of how it can work:
- A website owner opens the Website Performance Assessment Tool and enters their website URL
- They use the quick links to generate metrics for a Performance Grade, Number of Requests, Load time, Page size and a Desktop and Mobile Grade
- They enter the metrics into the tool and click Save
- They notice that the total Load time is more than 4 seconds so they ask their website developer to improve the results.
- Specifically they describe they want:
- the Load Time to decrease to less than 2 seconds and
- the Desktop and Mobile Grade should improve to better than 85% each
- The developer makes changes and tells the owner the changes are complete.
- The owner opens the Performance Tool, re-enters the metrics and evaluates whether that is true.
- One month later the owner comes back to the tool and checks again and can see the history and whether performance has degraded.
Utility – Bulk Convert the Unix Timestamp in log messages To a Readable Date
DNS Nameserver Response Comparison Tool
Custom network tools we use at Matraex
MySQL to update Canadian postal codes stored as 6 characters with no space to include a space
update tablename set zipcode = concat(substr(zipcode,1,3), ' ', substr(zipcode,4,3))
where 0=0
and zipcode regexp '^[A-Z][[:digit:]]'
and substr(zipcode,4,1) <> ' '
and length(zipcode) = 6
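Before running the update it can help to preview the rows it will match; a minimal check from the shell, where username, dbname and tablename are placeholders:
mysql -u username -p dbname -e "select zipcode from tablename where zipcode regexp '^[A-Z][[:digit:]]' and substr(zipcode,4,1) <> ' ' and length(zipcode) = 6 limit 20"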
A New Office for 2017
Hacking a corrupt VHD on Xen in order to access InnoDB MySQL information
A client ran into a corrupted .vhd file for the data drive of a Xen server in a pool. We helped them restore from a backup; however, there were some items that they had not backed up properly, and our task was to see if we could somehow restore that data from the corrupted drive.
First, we had to find the raw file for the drive. To do this we looked at the Local Storage -> General tab in XenCenter to find the UUID of the storage repository that contains the failing disk.
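On the dom0, each storage repository is mounted under /var/run/sr-mount, so once you have the SR UUID from XenCenter you can locate the raw file directly (the UUIDs below are the ones from this example):
# xe sr-list #confirm the UUID of the storage repository shown on the Local Storage -> General tab
# ls -lh /var/run/sr-mount/f40f93af-ae36-147b-880a-729692279845/ #the failing disk is the file named after its VDI UUID, e.g. 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd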
When we tried to attach the failing disk, we got this error:
Attaching virtual disk 'xxxxxx' to VM 'xxxx' The attempt to load the VDI failed
So we knew that the Xen servers / pool reject loading the corrupted vhd, and I came up with a way to try to access the data.
After much research I came across a tool published by 'twindb.com' called 'undrop tool for innodb'. The idea is that even after you drop or delete InnoDB files on your system, there are still markers in the file system which allow code to parse what 'used' to be on the system. They claimed some level of this worked for corrupted file systems as well.
- UnDrop Tool for innodb
The documentation was poor and it took a long time to figure out; however, they claimed to have 24-hour support, so I thought I would call them and just pay them to sort out the issue. They took a while and didn't call back before I had sorted it out myself. All of the documentation they did have showed a link to a github account, but the link was dead. I searched and found a couple of other people who had forked it before twindb took it down. I suspect they run more of a service business now, helping people resolve the issue, and no longer want to support the code. Since this code worked for our needs, I have forked it so that we can make it permanently available: https://github.com/matraexinc/undrop-for-innodb
The first step was to copy the .vhd to a working directory:
# cp -a 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd /tmp/restore_vhd/orig.vhd
# cd /tmp/restore_vhd/
# git clone https://github.com/matraexinc/undrop-for-innodb
# cd undrop-for-innodb
# apt-get install bison flex
# apt-get install libmysqld-dev #this was not mentioned anywhere, however an important file is quietly not compiled without it
# make #build the stream_parser, c_parser and sys_parser binaries (the build step is implied by the compiled files below)
# mv * ../. #move all of the compiled files into your working directory
# cd ../
# ./stream_parser -f orig.vhd #here is the magic: their code goes through and finds all of the ibdata1 logs and markers and creates data you can start to work through
# mv pages-orig.vhd pages-ibdata1 #the program created an organized set of data for you, and the next programs need to find it at pages-ibdata1
# ./recover_dictionary.sh #this will need to run mysql as root, and it will create a database named 'test' which has a listing of all of the databases, tables and indexes it found
This was where I had to come up with a custom solution in order to process the large volume of customer databases. I used some PHP to script the following commands for all of the many databases that needed to be restored. For each database and table you must run a command that corresponds to an 'index' file that the previous commands created, so you must loop through each of them (see the shell sketch after the c_parser step below).
select c.name as tablename
,a.id as indexid
from SYS_INDEXES a
join SYS_TABLES c on (a.TABLE_ID =c.ID)
This returns a list of the tables and their associated indexes. Using this, you must generate a command which:
- generates a create statement for the table you are backing up, and
- generates a load infile sql statement and associated data file
# sys_parser -h localhost -u username -p password -d test tablenamefromsql
This generates the create statement for the table; save it to a createtable.sql file and execute it on your database to restore the table structure.
# c_parser -5 -o data.load -f pages-ibdata1/FIL_PAGE_INDEX/00000017493.page -t createtable.sql
This outputs a "load data infile 'data.load'" statement; pipe it to MySQL and it will restore your data.
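Since that same pair of commands has to be repeated for every table and index, here is a minimal shell sketch of the loop described above (the original scripting was done in PHP). The credentials and the target database name are placeholders, page files appear to be named by the zero-padded index id as in the example above, and table names are passed to sys_parser exactly as returned by the dictionary query, which may need adjusting if they include a database prefix:
mysql -N -u root -ppassword test -e "select c.name, a.id from SYS_INDEXES a join SYS_TABLES c on (a.TABLE_ID = c.ID)" |
while read tablename indexid; do
    ./sys_parser -h localhost -u root -p password -d test "$tablename" > createtable.sql #rebuild the CREATE TABLE statement
    mysql -u root -ppassword targetdb < createtable.sql #apply it to the database being restored
    page=$(printf 'pages-ibdata1/FIL_PAGE_INDEX/%011d.page' "$indexid") #page file for this index
    ./c_parser -5 -o data.load -f "$page" -t createtable.sql | mysql -u root -ppassword targetdb #pipe the generated LOAD DATA INFILE into MySQL
done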
I found one example where the create statement was not properly created, for table_id 754. It appears that the sys_parser code relies on indexes, and in one case the client's table did not have an index (not even a primary key); this meant that no create statement was created and the import did not continue. To work around this, I manually inserted a fake primary key on one of the columns into the dictionary database:
# insert into SYS_INDEXES set ID=10000009, TABLE_ID=754, NAME='PRIMARY', N_FIELDS=1, TYPE=3, SPACE=0, PAGE_NO=400000000
# insert into SYS_FIELDS set INDEX_ID=10000009, POS=0, COL_NAME='myprimaryfield'
Then I was able to run the sys_parser command which then created the statement.
An Idea that Did not work ….
The idea was to create a new HDD device at /dev/xvdX, create a new filesystem on it and mount it. Then, using a tool such as dd or qemu-img, overwrite the already mounted device with the contents of the vhd. While the contents are corrupted, the hope was that we would be able to explore them as best we can, as sketched below.
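A minimal sketch of that setup, assuming the new disk showed up as /dev/xvde (the device used in the convert command below) and an ext3 filesystem:
# mkfs.ext3 /dev/xvde #create a filesystem on the newly attached disk
# mkdir -p /mnt/restore
# mount /dev/xvde /mnt/restore #mount it so the contents can be explored with find etc.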
So the command I ran was:
# qemu-img convert -p -f vpc -O raw /var/run/sr-mount/f40f93af-ae36-147b-880a-729692279845/3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd /dev/xvde
Where 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd is the name of the file / UUID that is corrupted on the Xen dom0 server, and f40f93af-ae36-147b-880a-729692279845 is the UUID of the Storage / SR it is located on.
The command took a while to complete (it had to convert 50GB), but the contents of the vhd started to show up as I ran find commands on the mounted directory. During the transfer the results were sporadic, as the partition was only partially built; however, after it was completed I had access to about 50% of the data.
An Idea that Did not work (2) ….
This was not good enough to get the files the client needed. I had a suspicion that the qemu-img convert command may have dropped some of the data that was still available, so I figured I would try another, somewhat similar command that actually seems a bit simpler.
This time I created another disk on the same local storage and found it using the xe vdi-list command on the dom0.
# xe vdi-list name-label=disk_for_copyingover
This showed me the UUID of the new disk; the corresponding file was 'fd959935-63c7-4415-bde0-e11a133a50c0.vhd'.
I found it on disk and I executed a cat from the corrupted vhd file into the mounted vhd file while it was running:
cat 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd > ../8c5ecc86-9df9-fd72-b300-a40ace668c9b/fd959935-63c7-4415-bde0-e11a133a50c0.vhd
Where 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd is the name of the file / UUID that is corrupted on the Xen dom0 server, and fd959935-63c7-4415-bde0-e11a133a50c0.vhd is the name of the VDI we created to copy over.
This method completely corrupted the mounted drive, so I scrapped it.
Next up:
Try some file partition recovery tools:
I started with testdisk (apt-get install testdisk) and ran it directly against the vhd file:
testdisk 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd
Commanddump – remove all kernel header packages
Servers fill up with kernels that are not in use.
Use this single command to remove them on Ubuntu / Debian:
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get purge -y
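To preview which packages would be removed before committing, the same pipeline can end in a harmless echo instead of the purge:
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs echo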
Automate and Measure Everything
Humans introduce errors as we perform manual tasks. The following list includes only some of the ways that human nature causes issues:
- Communication: We describe something incorrectly, or we interpret something differently than the person intended. This applies to both written and verbal communication.
- Input: We type something wrong and don't correct it (the "typo").
- Sequence: We forget steps, change their order or add extra ones.
- Timing: We take more or less time than we should have.
Some people are excellent, others are average, and some are downright horrible in some or all of the areas above.
Basically, as humans, our inconsistency leaves much room for improvement when performing any kind of task, and that room grows as the complexity, duration and number of steps in a task or process grows.
Computer software provides solutions to those problems when the issues above are considered in its design.
- Communication: Software can formalize the exact messaging to and from humans, requiring communication to be explicitly acknowledged and sending consistent reminders when it is not.
- Input: Both input and output can be formalized, so exact data is transferred between systems.
- Sequence: Steps can be completed in the exact same order, every time, without fail, and rigorous rules can be applied to ensure this happens.
- Timing: The exact same amount of time can be guaranteed between steps, consistently and without fail.
Matraex builds software that automates redundant and error-prone tasks and improves on these issues.
Even with software that addresses these issues, it is important that at some level a human can confirm items are being completed. Our software typically contains reports and dashboards which give managers and companies the ability to quickly verify that issues are dealt with correctly. Our favorite dashboards give users the ability to see exactly what they need to do next.
If you would like to explore custom software that helps you solve any of these issues – call Matraex at 208.344.1115