Author: Michael Blood
Solution -> Fatal error: Allowed memory size exhausted in wp-includes/class.wp-dependencies.php on line 339
Recently one of our Managed WordPress Services clients came to me to describe a problem with a WordPress site they were working on.
If you need in-depth help debugging an error on your WordPress or PHP site, contact us.
They were receiving an error: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 71 bytes) in /***REDACTED_PATH***/wp-includes/class.wp-dependencies.php on line 339
The client explained that they had not done any upgrades or changes that could cause the issue, and asked whether we had done any upgrades as part of the weekly site reviews we do under the Managed WordPress Service. I found in the weekly email report sent by another person in our office that we had done some upgrades; however, the file that was throwing the error (class.wp-dependencies.php) had not been updated. So I had to dig in deeper to find the root cause.
I ended up writing some debugging code that I placed directly in class.wp-dependencies.php, which helped me to identify the cause. Because I found lots of other sites when googling that had the same error, I decided I would post the debugging code in case it helps other users debug their issue. The issue ended up being that the site had enqueued a script, jquery, which had a dependency on jquery-migrate, which in turn had a dependency on jquery. This circular reference looped in the WP code until the server ran out of memory (in this case, about 128MB).
The theme and plugins had two locations which enqueued jquery scripts: one was enqueued BEFORE a dependency existed, then a dependency was created and the script was queued again. The code I wrote is intended for debugging purposes, because the issue could easily come from other causes, and I wanted to see the backtrace for exactly where the problematic scripts (jquery, jquery-migrate) were queued from.
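Before touching core files at all, a quick search across wp-content can show every place a theme or plugin enqueues the problematic handles. A minimal sketch (the handle name jquery and the standard wp-content layout are assumptions; run it from the WordPress root):

```shell
# Find every wp_enqueue_script() call for a jquery* handle in themes and plugins
grep -rn "wp_enqueue_script( *['\"]jquery" wp-content/themes wp-content/plugins
```

Each matching line includes the file and line number, which is often enough to spot a plugin re-registering jquery on top of the core registration.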
Here is the output from my code:
MATRAEX Debugging Code This WordPress project has circular dependencies
Aborting script execution at /***PATH_REDACTED***/wp-includes/class.wp-dependencies.php:400
Debugging tip: look for theme code that calls ‘enqueue’ on items which already have dependencies
If this error did not show, your script would likely have errored on line 339 when it ran out of memory from the circular dependency.
Most likely this is a plugin error!!
jquery has a dependency on jquery-migrate which has a circular dependency on jquery
Backtrace of when jquery was enqueued:
Instance 0:
#0 WP_Dependencies->enqueue(jquery) called at [/***PATH_REDACTED***/wp-includes/functions.wp-scripts.php:276] #1 wp_enqueue_script(jquery, http://dev.art4healing.org/wp-content/plugins/jquery-updater/js/jquery-3.1.1.min.js, , 3.1.1) called at [/***PATH_REDACTED***/wp-content/plugins/jquery-updater/jquery-updater.php:26] #2 rw_jquery_updater() #3 call_user_func_array(rw_jquery_updater, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524] #4 do_action(wp_enqueue_scripts) called at [/***PATH_REDACTED***/wp-includes/script-loader.php:1197] #5 wp_enqueue_scripts() #6 call_user_func_array(wp_enqueue_scripts, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524] #7 do_action(wp_head) called at [/***PATH_REDACTED***/wp-includes/general-template.php:2555] #8 wp_head() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php:19] #9 require_once(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php) called at [/***PATH_REDACTED***/wp-includes/template.php:572] #10 load_template(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php, 1) called at [/***PATH_REDACTED***/wp-includes/template.php:531] #11 locate_template(Array ([0] => header.php), 1) called at [/***PATH_REDACTED***/wp-includes/general-template.php:45] #12 get_header() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php:5] #13 include(/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php) called at [/***PATH_REDACTED***/wp-includes/template-loader.php:75] #14 require_once(/***PATH_REDACTED***/wp-includes/template-loader.php) called at [/***PATH_REDACTED***/wp-blog-header.php:19] #15 require(/***PATH_REDACTED***/wp-blog-header.php) called at [/***PATH_REDACTED***/index.php:17]
Instance 1:
#0 WP_Dependencies->enqueue(jquery) called at [/***PATH_REDACTED***/wp-includes/functions.wp-scripts.php:276] #1 wp_enqueue_script(jquery) called at [/***PATH_REDACTED***/wp-content/plugins/nextgen-gallery/nggallery.php:513] #2 C_NextGEN_Bootstrap->fix_jquery() #3 call_user_func_array(Array ([0] => C_NextGEN_Bootstrap Object ([_registry] => C_Component_Registry Object ([_searched_paths] => Array ( ....... #4 do_action(wp_enqueue_scripts) called at [/***PATH_REDACTED***/wp-includes/script-loader.php:1197] #5 wp_enqueue_scripts() #6 call_user_func_array(wp_enqueue_scripts, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524] #7 do_action(wp_head) called at [/***PATH_REDACTED***/wp-includes/general-template.php:2555] #8 wp_head() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php:19] #9 require_once(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php) called at [/***PATH_REDACTED***/wp-includes/template.php:572] #10 load_template(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php, 1) called at [/***PATH_REDACTED***/wp-includes/template.php:531] #11 locate_template(Array ([0] => header.php), 1) called at [/***PATH_REDACTED***/wp-includes/general-template.php:45] #12 get_header() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php:5] #13 include(/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php) called at [/***PATH_REDACTED***/wp-includes/template-loader.php:75] #14 require_once(/***PATH_REDACTED***/wp-includes/template-loader.php) called at [/***PATH_REDACTED***/wp-blog-header.php:19] #15 require(/***PATH_REDACTED***/wp-blog-header.php) called at [/***PATH_REDACTED***/index.php:17]
jquery-migrate has a dependency on jquery which has a circular dependency on jquery-migrate
Backtrace of when jquery-migrate was enqueued:
Instance 0:
#0 WP_Dependencies->enqueue(jquery-migrate) called at [/***PATH_REDACTED***/wp-includes/functions.wp-scripts.php:276] #1 wp_enqueue_script(jquery-migrate, http://dev.art4healing.org/wp-content/plugins/jquery-updater/js/jquery-migrate-3.0.0.min.js, Array ([0] => jquery), 3.0.0) called at [/***PATH_REDACTED***/wp-content/plugins/jquery-updater/jquery-updater.php:32] #2 rw_jquery_updater() #3 call_user_func_array(rw_jquery_updater, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524] #4 do_action(wp_enqueue_scripts) called at [/***PATH_REDACTED***/wp-includes/script-loader.php:1197] #5 wp_enqueue_scripts() #6 call_user_func_array(wp_enqueue_scripts, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524] #7 do_action(wp_head) called at [/***PATH_REDACTED***/wp-includes/general-template.php:2555] #8 wp_head() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php:19] #9 require_once(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php) called at [/***PATH_REDACTED***/wp-includes/template.php:572] #10 load_template(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php, 1) called at [/***PATH_REDACTED***/wp-includes/template.php:531] #11 locate_template(Array ([0] => header.php), 1) called at [/***PATH_REDACTED***/wp-includes/general-template.php:45] #12 get_header() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php:5] #13 include(/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php) called at [/***PATH_REDACTED***/wp-includes/template-loader.php:75] #14 require_once(/***PATH_REDACTED***/wp-includes/template-loader.php) called at [/***PATH_REDACTED***/wp-blog-header.php:19] #15 require(/***PATH_REDACTED***/wp-blog-header.php) called at [/***PATH_REDACTED***/index.php:17]
Add the following code to the query() function, just before the return statement within the "case 'queue':" block in the file wp-includes/class.wp-dependencies.php (in the version current as of this writing, that is about line 387).
//BEGIN NON WP CODE - MATRAEX
$check = array();
$recursivedependencies = array();
foreach ($this->queue as $queued)
    $check[$queued] = $this->registered[$queued]->deps;
foreach ($check as $queued => $deparr)
    foreach ($deparr as $depqueued)
        if (isset($check[$depqueued]) && in_array($queued, $check[$depqueued]))
            $recursivedependencies[$queued] = $depqueued;
if ($recursivedependencies) {
    ob_start();
    echo "MATRAEX Debugging Code\n";
    echo "This WordPress project has circular dependencies\n";
    echo "Aborting script execution at " . __FILE__ . ":" . __LINE__ . "\n";
    echo "Debugging tip: look for theme code that calls 'enqueue' on items which already have dependencies\n";
    echo "If this error did not show, your script would likely have errored on line 339 when it ran out of memory from the circular dependency.\n";
    echo "Most likely this is a plugin error!!\n";
    global $enqueue_backtrace;
    foreach ($recursivedependencies as $queued => $depqueued) {
        echo "\n$queued has a dependency on $depqueued which has a circular dependency on $queued\n";
        echo "Backtrace of when $queued was enqueued:\n";
        foreach ($enqueue_backtrace[$queued] as $k => $lines) {
            echo "Instance $k:\n";
            foreach (explode("\n", $lines) as $line)
                if (strstr($line, 'enqueue'))
                    echo "$line\n";
        }
    }
    $out = ob_get_clean();
    $out = str_replace($_SERVER['DOCUMENT_ROOT'], '', $out); // redact local paths from the output
    echo $out;
    exit;
}
//END NON WP CODE - MATRAEX
In addition, add the following code to the enqueue() function, just below the line with public function enqueue(). At the time of this writing, that was about line 378 of wp-includes/class.wp-dependencies.php.
//BEGIN NON WP CODE - MATRAEX
global $enqueue_backtrace;
ob_start();
debug_print_backtrace();
$enqueue_backtrace[$handles][] = ob_get_clean();
//END NON WP CODE - MATRAEX
Wordfence – CPU issue with exhaustive scans – COMMANDDUMP
Wordfence has some default scans which run hourly. On many systems this works well. In at least one case we found a situation where Wordfence was running hourly scans on some VERY large libraries at the same time on multiple sites on the same server.
A fix was implemented for this, but in the time that it took us to recognize this issue, we came up with the following command which helped to kill the CPU hog so we could continue to use the WordPress websites.
kill `apachectl fullstatus|grep wordfence_doScan|awk '{print $2}'`
Some of the ways you can find out that the issue is occurring are by running some of these investigative commands:
- apachectl fullstatus|grep wordfence – how many concurrent scans are running
- mysqladmin processlist|grep wf – the number of insert / update / select commands against Wordfence tables
- vmstat 1 – run a monitor on your system to see how active you are
- uptime – see your 1, 5 and 15 minute load averages
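Put together, a small wrapper around the kill command above avoids calling kill with an empty argument list when no scans are active (a hedged sketch; the wordfence_doScan marker and the PID in column 2 of apachectl fullstatus output are taken from the one-liner above):

```shell
#!/bin/sh
# Collect the PIDs of any running Wordfence scan requests from Apache's
# full status output, then kill them only if some were found.
pids=$(apachectl fullstatus | grep wordfence_doScan | awk '{print $2}')
if [ -n "$pids" ]; then
    echo "Killing Wordfence scan PIDs: $pids"
    kill $pids
else
    echo "No Wordfence scans running"
fi
```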
Command Dump – One line method to find errors in a large list of bind zone files
I have found the need to go through a large list of BIND zone files and find any that have errors.
This loop helps me identify them:
for a in `ls db.*.*|grep -v db.local.`; do named-checkzone localhost $a >/tmp/tmp 2>&1; if [ "$?" != "0" ]; then echo "ERROR ON:$a"; cat /tmp/tmp; fi; done|more
- ls db.*.*|grep -v db.local. – list each file that you would like to check (I listed all files matching db.*.* and excluded any of them that started with db.local.)
- named-checkzone localhost $a >/tmp/tmp 2>&1 – run the check and save the results (including errors) to a temp file
- if [ "$?" != "0" ]; then echo "ERROR ON:$a"; cat /tmp/tmp; fi; – if the command fails, print out the file name and the results
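A possible variant (my own assumption, not part of the original loop): since these files follow the db.ZONENAME convention, the real zone name can be derived by stripping the db. prefix from each filename and passed to named-checkzone instead of localhost:

```shell
#!/bin/sh
# For each db.* zone file (excluding db.local.*), derive the zone name from
# the filename and check the zone; on failure, print the file name and errors.
for f in db.*.*; do
    case $f in db.local.*) continue ;; esac
    zone=${f#db.}                            # db.example.com -> example.com
    if ! named-checkzone "$zone" "$f" > /tmp/checkzone.$$ 2>&1; then
        echo "ERROR ON:$f"
        cat /tmp/checkzone.$$
    fi
done
rm -f /tmp/checkzone.$$
```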
Building your iPhone App for iOS10 on XCode8 – NSPhotoLibraryUsageDescription
If you successfully build, archive and upload your iOS app using Xcode 8 against the iOS 10 SDK, then after you upload your application, you may see it in your iTunes Connect account for just a moment while it is processing, but then it disappears inexplicably.
Check your email from itunesconnect@apple.com; there are some additional requirements for your iOS 10 app that were not built into the Xcode warnings and requirements.
In one case we had a UIImagePickerController, so when we uploaded our application to iTunes Connect through the Organizer window in Xcode 8, it appeared that iTunes Connect accepted it with an “Upload Successful”. I was immediately able to see that the application was “Processing” under the “Activity -> All Builds” tab of iTunes Connect.
However, then it disappeared and I received a message from itunesconnect@apple.com with a couple of messages in it. One was a warning prefaced by “Though you are not required to fix the following issues, we wanted to make you aware of them:” That is a topic for another post; however, the important message was:
- This app attempts to access privacy-sensitive data without a usage description. The app’s Info.plist must contain an NSPhotoLibraryUsageDescription key with a string value explaining to the user how the app uses this data.
I opened my project's Info.plist and added the following: “<key>NSPhotoLibraryUsageDescription</key><string>This application requires access to the user's Photo Library if the user would like to set a profile image</string>”
When opening Info.plist as a Property List in Xcode, I was able to see that the full name of the key was “Privacy – Photo Library Usage Description”.
Once I had corrected the issue, I built, archived and uploaded again, and the build completed processing within iTunes Connect.
Enabling Xen VM auto start for 6.2- command line
Citrix removed auto start from the easy-to-access options in XenCenter for 6.x servers.
However you can still run it.
First enable it on your pool
- xe pool-param-set uuid=UUID other-config:auto_poweron=true
Then run a command to get all of the VMs in your pool and turn auto power on for all of the VMs that are currently on.
- xe vm-list power-state=running |awk -F: '/uuid/ {print "xe vm-param-set uuid="$NF" other-config:auto_poweron=true;"}'
This will give you a list of commands to enable auto_poweron for each of the running VMs in your pool
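To execute the generated commands instead of just printing them, the same awk output can be piped straight into a shell. A sketch (the gsub call is my addition, to strip the whitespace that awk's -F: split leaves in front of the UUID):

```shell
# Generate an "xe vm-param-set ..." command for each running VM and run them all
xe vm-list power-state=running \
  | awk -F: '/uuid/ {gsub(/ /, "", $NF); print "xe vm-param-set uuid=" $NF " other-config:auto_poweron=true"}' \
  | sh
```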
Find all PHP Short Tag instances – COMMANDLINE
Occasionally we have run across web products which were developed using the PHP short open tag “<?” instead of “<?php”.
We could go into the php.ini file and set “short_open_tag” to “On”; however, this ends up creating software which cannot run on as many servers, and it is less portable between servers.
The command below, when run from the directory that houses all of your PHP files, will identify all of the files which use short open tags. You will then be able to change the files from <? to <?php.
grep -rI '<?' -n . |grep -v '<?[(php)(xml)="]'
This command runs a first grep statement recursively in the current directory looking for any “<?”. The output is passed through another grep statement which then ignores any instances of “<?php”, “<?xml”, “<?=” and ‘<?”’.
Let's decompose:
- -r – means search the current (“.”) directory recursively
- -I means ignore binary files
- ‘<?’ – search for all instances of ‘<?’
- -n – add the line number of the found code to help you find it faster
- -v – excludes anything that matches the second grep statement
- ‘<?[(php)(xml)="]’ – the regular expression matches each of the items we want to ignore
Note:
I have included a double quote (") in the regular expression, which ignores <?”, because we have some PHP functions which loop through XML code and test for “<?”.
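Once the files are identified, the actual conversion can be scripted too. A hedged sketch (my own follow-up, not part of the original post): it only rewrites “<?” followed by whitespace, so <?php, <?xml and <?= are left alone, but back up your tree first:

```shell
#!/bin/sh
# Rewrite bare "<?" short open tags (followed by whitespace) to "<?php "
# in every non-binary file under the current directory.
grep -rIl '<?[[:space:]]' . | while read -r f; do
    sed -i 's/<?\([[:space:]]\)/<?php\1/g' "$f"
done
```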
Command Dump – Extending a disk on XenServer with xe
To expand a disk on a XenServer using the command line: I assume that you have backed up the data elsewhere before the expansion, as this method deletes everything on the disk being expanded.
- dom0>xe vm-list name-label=<your vm name> # to get the UUID of the VM = VMUUID
- dom0>xe vm-shutdown uuid=<VMUUID>
- dom0>xe vbd-list params=device,empty,vdi-name-label,vdi-uuid vm-name-label=<your vm name> # to get the vdi-uuid of the disk you would like to expand = VDIUUID
- dom0>xe vdi-resize uuid=<VDIUUID> disk-size=120GB # use the size that you would like to expand to
- dom0>xe vm-start uuid=<VMUUID>
That's it on the dom0. Now, as your VM boots up, log in via SSH and complete the changes by deleting the old partition, repartitioning and making a new filesystem. I am going to do this as though the filesystem is mounted at /data.
- domU>df /data # to get the device name =DEVICENAME
- domU>umount /dev/DEVICENAME
- domU>fdisk /dev/DEVICENAME
- [d] to delete the existing partition
- [n] to create a new partition
- [w] to write the partition table and exit fdisk
- mkfs.ext3 /dev/DEVICENAME
- mount /data
- df /data #to see the file size expanded
Looking for help with XenServer? Matraex can help.
Load problems after disk replacement on a ocfs2 and drbd system.
Notes blurb on investigating a complex issue. It was resolved, though without a concise root cause; the notes are kept in order to continue the investigation in case it happens again.
Recently, we had a disk failure on one of two SAN servers utilizing MD, OCFS2 and drbd to keep two servers synchronized.
We will call the two Systems: A and B
The disk was replaced on System A, which required a reboot in order for the system to recognize the new disk; then we had to --re-add the disk to the MD array. Once this happened, the disk started to rebuild. The OCFS2 and DRBD layers did not seem to have any issue rebuilding quickly as soon as the server came back up; the layers of redundancy made it fairly painless. However, the load on System B went up to 2.0+ and on System A up to 7.0+!
This slowed down System B significantly and made System A completely unusable.
I took a look at the many different tools to try to debug this.
- top
- iostat -x 1
- iotop
- lsof
- atop
The dynamics of how we use the redundant SANs should be taken into account here.
We mount System B to an application server via NFS, and reads and writes are done to System B. This makes it odd that System A was having such a hard time keeping up; it only has to handle the DRBD and OCFS2 communication in order to stay synced (System B is handling reads and writes, while System A only has to handle writes on the DRBD layer when changes are made; iotop shows this at between 5 and 40 K/s, which seemed minimal).
Nothing pointed to any kind of direct indicator of what was causing the 7+ load on System A. The top two processes seemed to be drbd_r_r0 and o2hb-XXXXXX, which take up minimal amounts of read and write.
The command to run on a disk to see what is happening is
#iotop -oa
This command shows you only the processes that have used some amount of disk read or write (-o), and it shows them cumulatively (-a) so you can easily see what is using the I/O on the system. From this I figured out that a majority of the writes on the system were going to the system drive.
What I found from this is that the iotop tool does not show the activity that is occurring at the DRBD / OCFS2 level. I was able to see on System B, where the NFS drive was connected, that the nfsd command was writing MULTIPLE MB of information when I would write to the NFS drive (cat /dev/zero > tmpfile), but I would see only 100K or so written to DRBD on System B, and nothing on System A; however, I would be able to see the file on System A.
I looked at the CPU load on System A when running the huge write, and it increased by about 1 (from 7+ to 8+), so it was doing some work; iotop just did not monitor it.
So I looked to iostat to find out if it would allow me to see the writes to the actual devices in the MD array.
I ran
#iostat -x 5
So I could see what was being written to the devices. Here I could see that the disk utilization on System A and System B was similar (about 10% per drive in the MD array), and the await time on System B was a bit higher than on System A. When I did this test, I caused the load to go up to about 7 on all servers (application server, System A and System B). Stopping the write made the load on the application server and on System B go back down.
While this did not give me the cause, it helped me to see that disk writes on System A are trackable through iostat, and since no writes were occurring when I ran iostat -x 5, I had to assume that there was some sort of other overhead causing the huge load. With nothing else I felt I could test, I just rebooted Server A.
Lo and behold, the load dropped; writing and deleting huge files was no longer an issue. The only thing I could think was that a large amount of traffic of some kind was being transferred back and forth to some ‘zombie’ process. (I had attempted to restart OCFS2 and DRBD and the system wouldn't allow that either, which seems to indicate a problem with some process being held open by a zombie process.)
In the end, this is the best description I can give of the problem. While this is not a real resolution, I publish it so that when an issue like this comes up in the future, we will be able to investigate three different possibilities in order to get closer to figuring out the true cause.
- Investigate the network traffic (using ntop for traffic, tcpdump for contents, and ethtool for total interface stats and possible errors)
- Disconnect / Reconnect the drbd and ocfs2 pair to stop the synchronization and watch the load balance to see if that is related to the issue.
- Attempt to start and stop the drbd and ocfs2 processes and debug any problems with that process. (watch the traffic or other errors related to those processes)
COMMANDDUMP – Cloning a WordPress website for a Sandbox, Upgrade or Overhaul
Over the years, we have had clients ask us to create an exact copy of their current website (files, database and all) in a sandbox environment that would not affect their existing website. This typically involves setting up a temporary domain and hosting environment, and a new MySQL database, however they need them to be populated with an exact copy.
The needs they have varies:
- often it is to just be able to test a change within a disposable Sandbox,
- sometimes, they may want to do some sort of an upgrade, but they do not have a dedicated development or test environment,
- and commonly it is to start some sort of a site overhaul using the existing site’s pages, blog entries and design. In this case they will often migrate this site to their production site in the future
While a copy and paste seems like the simple way to do this, there is much more that must occur. The list below describes all of the steps we have found so far:
- Copy all of the files from the OLD WordPress root, to the NEW WordPress root
- Copy the entire database from Database A to Database B
- Update the NEW WordPress install to connect to Database B
- Update the Database B install wp_options to have the NEW url (if you skip this step, attempting to login to the NEW WordPress install will redirect you to the OLD WordPress install)
- Update all posts, pages and other entries which have absolute links to the OLD WordPress install to have absolute links to the NEW WordPress install. (if you do not change this, you may end up with embedded images and links which point back to the OLD WordPress install, sometimes this can be difficult to realize because the file structure is identical)
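Steps 4 and 5 both reduce to a search-and-replace over the site's database dump. A minimal sketch (the URLs and dump filename here are placeholders, not from the original site):

```shell
# Replace every absolute reference to the OLD site URL with the NEW one
# inside a mysqldump file, in place.
OLD_URL="http://www.example.com"
NEW_URL="http://dev.example.com"
sed -e "s|$OLD_URL|$NEW_URL|g" -i wordpress_dump.sql
```

Using | as the sed delimiter avoids having to escape the slashes in the URLs. Note that serialized PHP data in wp_options stores string lengths, so a plain replace is only safe when both URLs have the same length or the affected options are unserialized.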
Once we realized this was going to be a common request and that we often need to do this from one directory on a server to another, we wanted to automate this process. We created a quick and dirty script which accomplishes all of the tasks of cloning the database and files, and then updating the contents of the database to the new location.
If you would like help with this process please contact us, Matraex would be happy to help you clone your WordPress website.
If you need a company to Manage your WordPress security and updates on a monthly basis please let us know here.
The script relies on some basic commands which should already be installed on your system, but you may want to confirm first
- sed
- mysql
- mysqldump
The script is one that you will run from the command line from within the existing WordPress website. You will run the command with parameters about the new WordPress website (the new directory, the new URL, and the new MySQL connection information).
The script does a couple of basic checks to make sure that the directory you are cloning to, does not already have a WordPress installation, and that the MySQL database is available but does not already have a WordPress install in the ‘default’ location.
It also uses the wp-config.php of the current WordPress installation to get connection information to the existing WP database and get the current URL.
If everything checks out, the script:
- copies all files from the old directory to the new directory
- dumps the existing database, manipulates a file to replace the old url with the new url
- imports the file into the new mysql database.
- updates the new directory wp-config.php to use the new MySQL connection information
File: wordpress_clone.sh
#!/bin/bash
echo
echo "Usage: $0 1-NEW_DIR 2-NEW_URL 3-NEW_DB_HOST 4-NEW_DB_NAME 5-NEW_DB_USER 6-NEW_DB_PASSWORD"
if [ "$1" == "" ] || [ "$2" == "" ] || [ "$3" == "" ] || [ "$4" == "" ] || [ "$5" == "" ] || [ "$6" == "" ]; then
    echo
    echo "Invalid Parameters; please review usage"
    echo "Exiting"
    echo
    exit
fi
NEW_DIR=$1
NEW_URL=$2          # the URL address that the new WordPress website is located at
NEW_DB_HOST=$3      # the name of the database server for the NEW WordPress install
NEW_DB_NAME=$4      # the name of the NEW WordPress database you want to connect to
NEW_DB_USER=$5      # the username to connect to the NEW WordPress database
NEW_DB_PASSWORD=$6  # the password to connect to the NEW WordPress database

# This script assumes that you entered perfect information; it does not
# confirm that the information you entered is valid before proceeding.
ORIG_DIR=`pwd`
OLD_DIR=$ORIG_DIR

if [ ! -e wp-config.php ]; then
    echo
    echo "The current directory is not an existing WordPress installation"
    echo "Exiting"
    echo
    exit
fi
if [ ! -d $NEW_DIR ]; then
    echo
    echo "The new directory $NEW_DIR does not exist"
    echo "Exiting"
    echo
    exit
fi
cd $OLD_DIR

# load all of the DB_ variables from the old wp-config.php into memory so we can dump the database
source <(grep "^define('DB" wp-config.php | awk -F"'" '{print $2"=\""$4"\""}')

EXISTING_NEW_DB=`mysql -u $NEW_DB_USER --password=$NEW_DB_PASSWORD -N --execute='select now()' -h $NEW_DB_HOST $NEW_DB_NAME 2>/dev/null`
if [ "" == "$EXISTING_NEW_DB" ]; then
    echo
    echo "New Database Connection Failed; a new blank database must be available in order to continue"
    echo "Exiting"
    echo
    exit
fi
EXISTING_NEW_URL=`mysql -u $NEW_DB_USER --password=$NEW_DB_PASSWORD -N --execute='select option_value from wp_options where option_id=1' -h $NEW_DB_HOST $NEW_DB_NAME 2>/dev/null`
if [ "" != "$EXISTING_NEW_URL" ]; then
    echo
    echo "There is already a WordPress database located at $NEW_DB_NAME: using '$EXISTING_NEW_URL'"
    echo "Exiting"
    echo
    exit
fi
OLD_URL=`mysql -u $DB_USER --password=$DB_PASSWORD -N --execute='select option_value from wp_options where option_id=1' -h $DB_HOST $DB_NAME`
if [ "" == "$OLD_URL" ]; then
    echo
    echo "The database configuration in wp-config.php for the current WP install does not have a valid connection to the database $DB_NAME $DB_USER:$DB_PASSWORD@$DB_HOST"
    echo "Exiting"
    echo
    exit
fi
echo "from:$OLD_URL"
echo "to  :$NEW_URL"
cp -ar $OLD_DIR/. $NEW_DIR/.

TMPFILE=$(mktemp /tmp/`basename $0`.XXXXXXXXX)
echo "Dumping Database"
mysqldump -h $DB_HOST --extended-insert=FALSE -c -u $DB_USER --password=$DB_PASSWORD $DB_NAME > $TMPFILE
echo "Temp DB File:$TMPFILE"
sed -e "s|$OLD_URL|$NEW_URL|g" -i $TMPFILE
cat $TMPFILE | mysql -u $NEW_DB_USER --password=$NEW_DB_PASSWORD -h $NEW_DB_HOST $NEW_DB_NAME
rm $TMPFILE
cd $ORIG_DIR
cd $NEW_DIR
sed -e "s/define('DB_USER', '[A-Za-z0-9]*/define('DB_USER', '$NEW_DB_USER/" -i wp-config.php
sed -e "s/define('DB_PASSWORD', '[A-Za-z0-9]*/define('DB_PASSWORD', '$NEW_DB_PASSWORD/" -i wp-config.php
sed -e "s/define('DB_HOST', '[A-Za-z0-9.]*/define('DB_HOST', '$NEW_DB_HOST/" -i wp-config.php
sed -e "s/define('DB_NAME', '[A-Za-z0-9]*/define('DB_NAME', '$NEW_DB_NAME/" -i wp-config.php
echo "Wrote DB Changes to $NEW_DIR/wp-config.php"
Resolving net::ERR_INCOMPLETE_CHUNKED_ENCODING in Chrome
We have a page with 10 categories displayed as accordions: when you click on one, it opens and displays the inputs for that category and allows them to be edited and saved. We had an Edit All button which would call the onclick function for each of the categories/accordions and open them all at once. When clicked individually there was no issue, but when the Edit All button was clicked, the page would lock up and become unresponsive, and we would get a blank white space from half the page down. If you waited long enough, you would see the net::ERR_INCOMPLETE_CHUNKED_ENCODING error in the console.
When googling for solutions to this issue, many mentioned a newer feature in Chrome labeled “Prefetch resources to load pages more quickly”. Since requiring users to turn off this option was not a valid solution, I looked into setting a meta tag in the header to turn this option off. This did not resolve the issue. Some possible solutions also mentioned turning off a setting in some antivirus programs, but this was also not a valid solution since we could not require it from end users. I also discounted setting the content-length header, since at this point the headers had already been sent.
Another possible cause for the issue was that the server might not be sending the terminal 0-length chunk. I found one area within the code that had an ob_start() without an end. Tried adding several flavors of ob_end and even ob_flush and this did not resolve the issue.
Next I looked into determining how many of the accordions could be clicked at the same time before the error occurred. It turned out that 9 could be clicked without the error, and it was not any specific accordion causing it. I also determined that the process of opening them started slowing down around 6-7 at the same time. I wrote a loop calling a function via setTimeout that would set a class, groupsopening, on a subset of the accordions that needed to be opened and did not already have that class, and open just those. Then I would bump up the setTimeout time and run the function again, looping until all of the accordions were open. I ran into issues with this method where the accordions would all still try to open at the same time. I believe the issue was related to calling setTimeout with the same function, but I'm not completely sure; I had that feeling based on some research into the setTimeout function.
Next I tried the setInterval method and set up a function to be called every 200ms. I added a global variable to keep track of how many times the function had been called, and cleared the interval if it was over 50, because I determined that if it ran that many times it would be in an infinite loop. Next I added a check for tables inside each of the accordions with the new groupsopening class (the tables are only added after an accordion is open), and returned if some still needed to be opened. Then I added code to pull accordions without the new class. If there were no accordions without the new class, we were done and I cleared the interval. If there were still some accordions without the class, I marked another subset with the new class and called the click function to open them. This resolved the issues and prevented the error.