What are the Benefits of App Prototyping?
App prototyping offers plenty of benefits compared to full app development. It’s a great way to get your ideas from paper to a functioning mockup.
The phrase “a picture is worth a thousand words” is an apt way to describe what an app prototype does for the idea you have for an app. When you create a prototype, it helps bring the idea to life. Let’s look at some of the core benefits of app prototyping.
6 Benefits of App Prototyping
1. Highly Cost-Effective
Compared to developing your app completely, app prototyping is a much cheaper option. It’s cost-effective because you gain a working mockup to help show off the idea without investing in complete app development.
Whether you’re starting a new business or trying to create an app for your existing business, investing in app prototyping offers a less expensive solution.
2. More Clarity
You might have an idea for an app, but you’re not sure how it will work or look. With app prototyping, you get the ability to see your app function and gain clarity throughout the process. It works as the ultimate visual aid to help you gain feedback from others and make adjustments to the design and functionality.
3. Gain Feedback
It’s hard to send a sketch to someone and get their feedback on an app. Sure, they might be able to give you some pointers on the colors or the design, but they cannot see any functionality or see the app on an actual screen.
When you create a prototype of your app, you can gain feedback from potential investors and customers. This feedback can be priceless as it can help you make changes to better suit your target market.
4. Perfect the Design before Development
When you choose app prototyping, you get the ability to test, analyze, adjust, and repeat multiple times. You can try out different functions and figure out what works best for the end-user.
Instead of launching an app and releasing update after update, you can fix issues and adjust functionality during the prototyping stage.
5. Provide Something Tangible for Investors
It’s hard enough to gain funding when you cannot show sales yet. Investors need to understand your idea and how it works to solve a problem or provide convenience.
With app prototyping, you’ll be able to provide something tangible for investors. They can see how the app works and what to expect with the end results once the app is developed.
6. Validate the User Experience
One of the most important factors for any app is the user experience. Without a working mockup, it’s hard to validate the user experience. App prototyping allows you to find out if your app provides a good user experience or needs some work.
When you choose app prototyping instead of full app development, you gain access to a more cost-efficient way to bring your idea to life. Many benefits come with creating a prototype first and developing the app later. These are just a few of the core benefits you’ll gain from the app prototyping process.
3 Reasons Mobile App Developers Need Prototypes
For mobile app developers, it all starts with an idea. The idea is the beginning of the journey towards watching an app become functional and solve a problem for the end-user.
As you go through the development phases, you’ll need to take your idea and bring it to life. This means figuring out how the app will look, feel, and function. With app prototyping, you gain a better, more cost-effective solution to bring your idea to life.
There are many reasons why mobile app developers need prototypes. Let’s look at a few of the most common reasons.
Saves Time and Money
The biggest reason anybody chooses to do something is to save time or save money, especially when developing a new idea. App prototyping helps with both. You will be able to bring your idea to life without going through the full development, which takes up time and costs money.
Consider the worst-case scenario. You hire a full app development team and they work around the clock to bring your vision to life. Then, once the app is completed, you realize the design isn’t as appealing as you hoped, or it simply doesn’t work as you envisioned. Now you have to go back and fix things, which costs more money and takes more time.
With app prototyping, you can see your idea come to life for a lower cost and it will be completed much faster. Once you see the app and the functionality, it’s much easier to make changes before you launch the app to the public.
Provides a Way to Vet the Idea First
With app prototyping, you gain a better way to fully vet your idea before sending it into the expensive and time-consuming development phases. Instead of releasing updates after you’ve launched, you’ll be able to avoid costly issues and fix problems early on.
Prototypes can be tested and adjusted multiple times before being fully developed. You don’t have to go into the coding stages before you see how things will work and look. This means you get to vet your app before it’s ever really created.
Gain Incredible Feedback
If you want to know what potential customers and investors will think before spending the money to fully develop your app, a prototype is for you. With an app prototype, you can gain feedback from your target market and potential investors, which can be invaluable.
Some investors might have contingencies, which include specific changes they believe will benefit the performance of your finished app. If you’ve already developed the app, these changes can be very expensive and time-consuming to make. With a prototype, you’ll likely be more open to the feedback of investors and potential customers since the changes will only incur minimal costs.
There are several reasons why mobile app developers need prototypes, and these three are among the main ones. You will also gain many other benefits, along with the ability to see your idea come to life without fully investing in the entire development of an app.
App Prototyping vs. Full App Development
When it’s time to create an app for your startup business or for an existing business, app prototyping offers an excellent starting point. The prototyping stage allows you to make changes before you get to the final stages of full app development.
It’s important to compare app prototyping and full app development before you move forward. Developing your app won’t be cheap and you want to make sure you invest your funds wisely. Let’s look at app prototyping vs. full app development to make it a bit easier to see the differences.
What is App Prototyping?
Taking an idea for an app and showing its value can be done with app prototyping. It gives you a clean tool to use when pitching investors without a completed final product.
App prototyping is basically a model of the app you want to create. It gives you the ability to test the app before you spend time and money on full app development. Many companies use app prototyping to show interested investors the concepts and ideas before complete development.
When you choose app prototyping, the cost will be lower and you will be able to see the functionality of the app. This stage in development allows for changes to the design, the functionality, and pretty much anything else involved with the app. It takes your idea from a sketch on paper to an actual visualization of the app complete with functionality through a working layout and design.
What is Full App Development?
When you choose full app development, you might have already gone through the prototyping stage in development. If not, you will likely be hiring a team of developers and going through the phases of developing, testing, and launching your app.
This is great if you already know what you want and you have the funds to support full app development. However, if your app is simply an idea and needs to be tweaked along the way, app prototyping offers a less expensive way to get a functioning prototype ready.
Benefits of App Prototyping
1. Very Cost-Efficient
When you start your project with app prototyping, you’ll be using a more cost-efficient option. The process gives you the ability to solve problems during the beginning stages of the process instead of waiting until the end.
It’s easier and less costly to make changes during the testing phase compared to making changes to a nearly finished product.
2. Ability to Pitch Investors
Maybe you have an idea, but you need funding. Using app prototyping allows you to create something you can show potential investors before going through the more expensive full app development process.
3. Exploration and Improvement
When you develop an app, you might find ways it can work better or you might want to change the functionality along the way. App prototyping offers an easier way to make changes throughout development before you’ve paid for a nearly complete app.
When comparing app prototyping with full app development, prototyping is the better starting point for many. You likely still have the goal of a fully developed app, and prototyping leads there; it simply gets most companies looking to develop a new app to that goal with less risk and expense.
AWS RDS SSL/TLS Certificate Upgrade
It is best to start the certificate upgrade process by first testing it on a copy of the database to ensure that, if there is an issue, it will not affect live users.
The process for creating a copy of the RDS database on AWS and upgrading its certificate is as follows:
Step 1. Overview: Download the root certificate, move it to your application, and then set up a script on your live server to test the connection to a test instance of the RDS DB without affecting end users. (Instructions for this section are for PHP, but you could use other languages to achieve the same thing.)
Download the root cert that works for all AWS Regions from https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
Under the heading Using SSL/TLS to Encrypt a Connection to a DB Instance, click on the link https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem. (Only download the intermediate certificate for the region where the servers are located if the chain certificate does not work. Make sure to choose from the column with the newest certificate.)
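Before moving the certificate anywhere, it can be worth a quick sanity check that the PEM file downloaded intact. A minimal PHP sketch, assuming the openssl extension is available (the path here is just an example):
// Parse the downloaded root certificate and print its subject and
// validity window before deploying it to any servers.
$pem = file_get_contents('/tmp/rds-ca-2019-root.pem'); // example path
$info = openssl_x509_parse($pem);
if ($info === false)
die("Could not parse certificate - the download may be corrupt\n");
echo 'Subject CN: '.$info['subject']['CN']."\n";
echo 'Valid from: '.date('Y-m-d', $info['validFrom_time_t'])."\n";
echo 'Valid to: '.date('Y-m-d', $info['validTo_time_t'])."\n";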
Use FileZilla or some other method to move the certificate to the development server. On one server we moved the certificate to /etc/ssl/certs/ and on another we moved it to /data/webs/[base file for site]/certs/.
(The following instructions are for a server using OpsWorks, where we have dynamic scripts set up to create files on the instances through a deployment to a specific stack. If you are not using OpsWorks, you could just move the file straight to your production server in the correct location and set up the permissions manually.)
OpsWorks server: Modify the build script for the Apache recipe to grab the version of the file on the development server, recreate it, and put it on the AWS server in the right location.
We added a section to the file like the following:
file '/etc/ssl/certs/rds-ca-2019-root.pem' do
content <<-EOH<?=file_get_contents("/etc/ssl/certs/rds-ca-2019-root.pem")?>
EOH
mode '0644'
owner 'root'
group 'root'
end
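For context, the PHP short echo tag inside the heredoc above is evaluated by our dynamic build script, which inlines the PEM contents into the generated recipe. A rough sketch of that generation step, with the file names assumed for illustration:
// Evaluate the recipe template (the block above), capturing the output so
// the short echo tag is replaced by the actual certificate contents.
ob_start();
include 'apache.rb.template.php'; // assumed template file name
file_put_contents('apache.rb', ob_get_clean()); // generated Chef recipe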
Run the build script from the custom developer OpsWorks area and verify the file shows up in the body of the apache.rb recipe. Then deploy the recipe so it will be run on the production stack on AWS and create the file in the correct location. Log in to one of the instances on the production stack and verify the file has been created.
(End of OpsWorks-specific section.)
Modify the database connection for your site (ours was in a specific function controlling the connection and making it available to the rest of our pages) to add the certificate requirement to the connection.
MySQL database specific section:
For MySQL databases we modified the line:
$dblink = mysqli_connect($server, $user, $pass);
To instead use:
$certpath=trim($_SERVER['DOCUMENT_ROOT'].'/certs/rds-ca-2019-root.pem');
$dblink = mysqli_init();
$dblink->options(MYSQLI_OPT_SSL_VERIFY_SERVER_CERT, true);
$dblink->ssl_set(NULL, NULL, $certpath, NULL, NULL);
$dblink->real_connect($server, $user, $pass, $name);
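As a self-contained reference, here is the same connection as a minimal sketch with basic error handling added; $server, $user, $pass, and $name are assumed to be defined by the surrounding connection code:
$certpath = trim($_SERVER['DOCUMENT_ROOT'].'/certs/rds-ca-2019-root.pem');
$dblink = mysqli_init();
// Require the server certificate to verify against our CA file.
$dblink->options(MYSQLI_OPT_SSL_VERIFY_SERVER_CERT, true);
// Only the CA file is needed; the client key/cert arguments stay NULL.
$dblink->ssl_set(NULL, NULL, $certpath, NULL, NULL);
if (!$dblink->real_connect($server, $user, $pass, $name))
die('Connect failed: '.mysqli_connect_error());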
We first left this new section of code commented out and modified our db_connect_single function to check a global variable, testcertrequirements, an array containing certpath and host. If this global variable was set, we would use its data to override the default connection data, so we could test the connection as a developer user without causing issues for the other users on the system. This is a snippet of the connection function (the d() calls just display info for developers):
function db_connect_single($name, $user, $pass, $server='')
{
global $testcertrequirements;
if($_GET['debug']||$_GET['db_connect_debug'])
d("db_connect_single($name, $user, $pass, $server='')");
if(!$server)
$server='localhost';
if (defined("ENVIRONMENT_CURRENT"))
$currentenv=ENVIRONMENT_CURRENT;
else
$currentenv='';
if (
(strtolower($currentenv)=='prod')
&& $testcertrequirements
&& is_array($testcertrequirements)
&& trim($testcertrequirements['host'])
&& trim($testcertrequirements['certpath'])
&& file_exists(trim($testcertrequirements['certpath']))
) //Method for testing new certificate requirements without affecting current users on AWS.
{
$server=trim($testcertrequirements['host']); //AWS endpoint for the new test DB instance spun up from a snapshot in RDS and set up with the new certificate.
d('$testcertrequirements[certpath]',trim($testcertrequirements['certpath']));
d('$server',$server);
$dblink = mysqli_init();
$dblink->options(MYSQLI_OPT_SSL_VERIFY_SERVER_CERT, true);
$dblink->ssl_set(NULL, NULL, trim($testcertrequirements['certpath']), NULL, NULL);
$dblink->real_connect($server, $user, $pass, $name);
}
/*
elseif(strtolower($currentenv)=='prod') //Default on aws is to use new cert since db updated to new cert.
{
$certpath=trim($_SERVER['DOCUMENT_ROOT'].'/certs/rds-ca-2019-root.pem');
$dblink = mysqli_init();
$dblink->options(MYSQLI_OPT_SSL_VERIFY_SERVER_CERT, true);
$dblink->ssl_set(NULL, NULL, $certpath, NULL, NULL);
$dblink->real_connect($server, $user, $pass, $name);
}
*/
else
{
if($testcertrequirements)
{
d('<span style="color:red">DID NOT USE TEST CERT REQUIREMENT FOR CONNECTION</span>');
d('using default connection and not dynamic one.');
d('$currentenv',$currentenv);
d('connecting using server:'.$server.' and not test certificate requirement');
d('$testcertrequirements',$testcertrequirements);
if(!file_exists(trim($testcertrequirements['certpath'])))
d('Missing cert file:',trim($testcertrequirements['certpath']));
}
$dblink = mysqli_connect($server, $user, $pass);
}
return $dblink; // assumed: the remainder of our function is omitted in this snippet
}
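A hedged usage sketch for the test path (the endpoint and credentials below are placeholders, not real values):
global $testcertrequirements;
$testcertrequirements = array(
'host' => 'test-db-cert-upgrade.xxxxxxxx.us-west-2.rds.amazonaws.com', // placeholder endpoint
'certpath' => '/etc/ssl/certs/rds-ca-2019-root.pem',
);
$dblink = db_connect_single('mydbname', 'devuser', 'devpass'); // placeholder credentials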
(End MySQL specific section)
PostgreSQL database specific section:
For PostgreSQL databases we modified the lines:
$con_str = "host=$hostname port=$port dbname=$dbname user=$user password=".$password;
timetrack('db_connect', $con_str);
$dbconn = pg_connect($con_str);
To instead use:
$addcertrequirement="";
if(is_aws())
$addcertrequirement=" sslmode='verify-full' sslrootcert='/etc/ssl/certs/rds-ca-2019-root.pem'";
$con_str = "host=$hostname port=$port dbname=$dbname user=$user password=".$password.$addcertrequirement;
$dbconn = pg_connect($con_str);
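One way to confirm the new connection string really negotiated SSL is to ask the server itself. A hedged sketch, assuming PostgreSQL 9.5 or later (which provides the pg_stat_ssl view); $con_str is built as shown above:
$dbconn = pg_connect($con_str);
if ($dbconn === false)
die("Connection failed\n");
// Ask the server whether our own backend session is using SSL.
$res = pg_query($dbconn, "SELECT ssl, version, cipher FROM pg_stat_ssl WHERE pid = pg_backend_pid()");
$row = pg_fetch_assoc($res);
if ($row['ssl'] === 't')
echo "SSL in use: {$row['version']} / {$row['cipher']}\n";
else
echo "WARNING: connection is not using SSL\n";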
We first left this new section of code commented out and modified our db_connect function to check a global variable, testcertrequirements, an array containing certpath and host. If this global variable was set, we would use its data to override the default connection data, so we could test the connection as a developer user without causing issues for the other users on the system. This is a snippet of the connection function (the d() calls just display info for developers):
global $testcertrequirements;
$addcertrequirement="";
/*
if(is_aws())
$addcertrequirement=" sslmode='verify-full' sslrootcert='/etc/ssl/certs/rds-ca-2019-root.pem'";
*/
if($testcertrequirements && is_array($testcertrequirements) && trim($testcertrequirements['host']) && trim($testcertrequirements['certpath'])) //Method for testing new certificate requirements without affecting current users on AWS.
{
d('$testcertrequirements',$testcertrequirements);
$addcertrequirement=" sslmode='verify-full' sslrootcert='".$testcertrequirements['certpath']."'";
if(!is_aws())
$addcertrequirement=" sslmode='prefer' sslrootcert='".$testcertrequirements['certpath']."'";
else
$hostname=trim($testcertrequirements['host']); //AWS endpoint for the new test DB instance spun up from a snapshot in RDS and set up with the new certificate.
d('$addcertrequirement',$addcertrequirement);
d('$hostname',$hostname);
}
//decide which server to connect to based on the environment
$con_str = "host=$hostname port=$port dbname=$dbname user=$user password=".$password.$addcertrequirement;
timetrack('db_connect', $con_str);
$dbconn = pg_connect($con_str); //@pg_connect($con_str); use @ to suppress connection errors
(End PostgreSQL specific section)
After setting up the database connection functions, we created a file to be able to set up and test the connection as a specific user on production without affecting all the other users on the system. Code snippet for our test file:
global $testcertrequirements;
$testcertrequirements=array();
if($_POST['testconnection'])
{
if(!trim($_POST['host']))
set_message('A hostname is required to test the connection','error');
if(!trim($_POST['certpath']))
set_message('A path including the filename to the new certificate is required to test the connection with the new certificate','error');
if(!has_message('error'))
{
$testcertrequirements['host']=$_POST['host'];
$testcertrequirements['certpath']=$_POST['certpath'];
d('$testcertrequirements',$testcertrequirements);
db_connect();
$sql="[ADD AN SQL SELECT STATEMENT HERE FOR A CORE DATABASE TABLE THAT CONTAINS INFORMATION IN YOUR DATABASE]
";
$logingcheckqry=db_query($sql);
d('$logingcheck sql',$sql);
d('$logingcheckqry',$logingcheckqry);
if($logingcheckqry)
set_message('Connection appears to be successful','success');
}
$testcertrequirements=array();
db_connect(); //Changing back connection to default for sql in footer.
}
display_messages('error');
$defaulttesthost=config_var('dbserver'); //Testing db instance endpoint already setup for new certificate.
if($defaulttesthost && !trim($_POST['host']))
$_POST['host']=$defaulttesthost;
$defaulttestcertpath='/etc/ssl/certs/rds-ca-2019-root.pem';
if($defaulttestcertpath && !trim($_POST['certpath']))
$_POST['certpath']=$defaulttestcertpath;
d('post',$_POST);
?>
<h3>
Test New DB Certificate for SSL
</h3>
<form id="testnewcertconnection" method="POST" enctype="multipart/form-data">
<table style='width:25%'>
<tr>
<td>
Host
</td>
<td>
<input type='text' name='host' id='host' value='<?=$_POST['host']?>' />
</td>
</tr>
<tr class='bgmint'>
<td>
Certificate Path
</td>
<td>
<input style='width:95%' type='text' name='certpath' id='certpath' value='<?=$_POST['certpath']?>' />
</td>
</tr>
<tr>
<td colspan=2>
<input type="submit" value="Test Connection" name="testconnection">
</td>
</tr>
</table>
</form>
<?
include_once('footer.php');
Next, migrate this code for the database connection and run the test file on the production site against the existing production DB to verify you can select data from the database. This will show that the test file and the connection are set up correctly.
Step 2. Overview: Create a new instance of the existing database from a snapshot to test the upgrade process so you can verify it works on a clone of the existing database with the same data.
Start by logging into the AWS Console, then click Services > Database > RDS > Snapshots.
Choose the newest snapshot (or create a new snapshot of the database and choose that one). Copy the KMS key ID from the snapshot details page and also make a note of the DB Storage. Click Actions > Restore Snapshot.
Select the DB Instance Class that most closely resembles the DB Storage you noted earlier for the existing snapshot of the database. Type in a DB Instance Identifier (Identifier for the new database) like Test-DB-Cert-Upgrade.
Under Encryption, click Master key and select “Enter a key ARN” (if no key is already shown). Use the key from the details for the existing database. So if the key from the snapshot was efLR5721-a243-4067-bb80-fbecd491dec0 and the region was us-west-2, the key ARN would be:
arn:aws:kms:us-west-2:[your AWS account ID]:key/efLR5721-a243-4067-bb80-fbecd491dec0
You should be able to leave all the other options the same. Next, click Restore DB Instance. Once the new instance has been created from the snapshot, click on Databases in the main menu, click on the instance you just made, and make a note of the endpoint for this test database.
If you have custom network security groups, you will need to update them on the database now, or you may not be able to connect to the new test database instance. Start a timer so you can estimate how long the upgrade process will take. Click Modify, and in the Network & Security section, click the drop-down for the security group and choose your custom group. In this same section, set the Certificate authority drop-down to the new certificate (in our case it was rds-ca-2019). Click Continue, select the radio button to apply the changes now (Apply Immediately), and then run the modification.
Step 3. Overview: Test the connection to the DB using the test script, then schedule downtime with the client and upgrade the actual server.
Run the test file on the production server, selecting the endpoint for the new test database as the host and the current certificate path. Verify the connection works for the test DB and stop the timer started in Step 2 to get an idea of how long the upgrade process and test will take. Once the test is successful, set up a time to do the update with the client and delete the test instance of the database. (We scheduled around 30 minutes, and the whole process usually took around 10. We already had a method for showing a downtime message on the site while it was down.)
During the time scheduled with the client, put up the site-down message and migrate it to production. In the AWS console, click Modify, and in the Network & Security section, set the Certificate authority drop-down to the new certificate (in our case it was rds-ca-2019). Click Continue, select the radio button to apply the changes now (Apply Immediately), and then run the modification. (This usually takes less than a minute.)
Use the test file to verify the connection, using the endpoint for the production database as the host along with the certificate path. Once testing is successful, go into the db connect function and remove the commented section so the default connection to the server will use the new certificate. Migrate this change to production and verify the connection is still working and pulling data from the existing database using the certificate. Remove the site-down message and migrate it to production, verify the site-down message no longer displays, and you should be finished.
Setting up TestFlight on iPhone or iPad
TestFlight is an Apple app that allows users to test an app before it launches on the App Store. It is a great way to see the progress made and make adjustments before the final product is released to the public. After you have downloaded the TestFlight app from the App Store and entered your Apple ID, the video above describes the final steps to get synced with the invite from Apple App Store Connect so you can start receiving invites to test the latest app.
1. On your iPad, download TestFlight from the App Store.
2. Check your email for the Apple Developer site email. Follow the link in the email, log in, and accept with your Apple ID (the one you use on your iPad). Once accepted, I can then add you to the testing team. Please note you do not need to download any app except TestFlight.
3. Watch your email account for a new email from TestFlight. Open the email and select “View in TestFlight”. This will launch the TestFlight app, where you can download the latest version of the app.
The right app partner
The right app partner can change everything.
We have built many of the apps that your employees already know and love to use. Mobile and cloud apps integrate easily into your existing workflow, which means there’s no need to change how you already work to unlock efficiency.
Meet the worker bot.
An app can empower employees to produce more, faster, by offloading routine processes onto worker bots. A worker bot is a software process that handles chores for you.
A worker bot may also route information between disparate systems. Integrating your customer relationship management software with your QuickBooks installation has never been easier.
Now with worker bots and smart integrations, employees can stay productive and also monitor the small things that lead to huge savings.
When employees are unencumbered by simple tasks or data collection, they’re not just happier, they’re more engaged and more productive.
New ways to make an impact at work.
Today’s business world has never been more mobile. So we create apps that give employees everything they need to be productive, wherever they are. Your app may be customized to fit the precise needs of your business, or perhaps it is extended to fire an alert when some process falls out of variance.
Improving customer service may mean a more timely response, or having access to the manufacturing or shipping data from your mobile. Matraex integrates systems to make your sales and service teams more nimble.
The world’s experts are also our partners.
To help give your employees and customers the best app experience, we’ve engaged some of the world’s leading technology companies. Whether you are looking for a cloud hosting platform, backend system integration specialists, or mobile network services, you’ll have access to experts around the world you can work with and learn from.
At Matraex, our core is developing smart solutions for business. We build core infrastructure that creates recognizable returns. Is it time for us to help you elevate your business?
Proftpd PassivePorts Requirements (or Not Working)
After an exhaustive research session attempting to enable passive FTP on a Proftpd server, I found and am now documenting this issue.
PassivePorts is a directive in proftpd.conf that configures proftpd to use a specific set of ports for passive FTP. You would then allow these ports through your firewall to your server.
The full configuration, the reasons you would use passive vs. active FTP, and how to set it all up on your server and firewall are beyond the scope of this document, but a couple of links that might get you there are here:
- http://proftpd.org/docs/directives/linked/config_ref_PassivePorts.html
- https://ubuntuforums.org/showthread.php?t=39566
- http://matrafox.info/proftpd-passive-port-not-working.html
- http://slacksite.com/other/ftp.html
In my first attempts I used the port range between 60000 and 65535; the firewall ports were forwarded, and things did not work:
- PassivePorts 60000 65535
So I had to dig in and find out why not. I enabled debugging in FileZilla and ran proftpd at the command line to try to see what was happening:
- proftpd -n -d30
I found a post somewhere that explained how to read the response to the PASV command:
- Entering Passive Mode (172,31,10,46,148,107)
The last two octets in the response encode the port number; you calculate it as (148 * 256 + 107) = 37995. Even though I had the server set up to use PassivePorts 60000-65535, it was still attempting to use 37995. Once I figured out how to confirm which port was being sent, I realized that the issue was not a firewall or other problem, but rather something in the system.
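For convenience, here is a small illustrative PHP helper (not part of the original setup) that pulls the octets out of a PASV response and computes the data port:
function pasv_port($response)
{
// Match the six comma-separated numbers: (h1,h2,h3,h4,p1,p2)
if (!preg_match('/\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)/', $response, $m))
return false;
return (int)$m[5] * 256 + (int)$m[6]; // port = p1 * 256 + p2
}
echo pasv_port('Entering Passive Mode (172,31,10,46,148,107)'); // prints 37995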
I happened across the Slacksite article linked above, which helped me find this in the Proftpd documentation:
PassivePorts restricts the range of ports from which the server will select when sent the PASV command from a client. The server will randomly choose a number from within the specified range until an open port is found. Should no open ports be found within the given range, the server will default to a normal kernel-assigned port, and a message logged.
In my research I was unable to find any such logged message, so I don’t believe one shows anywhere. However, this passage helped me realize that some issue on my system might be preventing ports 60000 to 65535 from being available, and I started playing with the range:
- 60000-61000 and 59000-60000 had no effect; the system was still assigning ports within the 30000 to 40000 range.
- 50000-51000 had the same effect.
So I tried some entries between 30000 and 40000, and I found I could consistently control the ports if I used any range between 30000 and 40000:
- PassivePorts 30000 32000 – gave me 31456, 31245, 30511, etc
- PassivePorts 32000 34000 – gave me 33098, 32734, 33516, etc
- etc
From this I figured out that I could only control the ports on this system in a range lower than the one I was originally attempting.
I did more research and found that there is a sysctl variable that shows the local ephemeral port range:
- sysctl -a|grep ip_local_port_range
On my system, for some reason, this was set to:
- net.ipv4.ip_local_port_range = 32768 48000
I attempted setting this to a higher number
- sysctl -w net.ipv4.ip_local_port_range="32768 65535"
However, this did not change the way proftpd allocated the ports; only the lower range was available. Perhaps I could have set the variable in sysctl.conf and restarted, but I stopped my investigation there. Instead, I changed the firewall rules to allow ports 32000 to 34000 through, and I stuck with the configuration:
- PassivePorts 32000 34000
What I learned from this was:
PassivePorts only suggests that your system use the range of ports you specify. If that range is not available, the system quietly selects a port outside the range you specified. If you have problems with your FTP hanging at MLSD, check your logs to verify which port has been assigned, using the calculation (5th octet * 256 + 6th octet).
Load problems after disk replacement on an OCFS2 and DRBD system
A notes blurb on investigating a complex issue. It was resolved, though without a concise root cause; the notes are kept in order to continue the investigation in case it happens again.
Recently, we had a disk failure on one of two SAN servers utilizing MD, OCFS2 and drbd to keep two servers synchronized.
We will call the two Systems: A and B
The disk was replaced on System A, which required a reboot for the system to recognize the new disk; then we had to --re-add the disk to the MD array. Once this happened, the disk started to rebuild. The OCFS2 and DRBD layers did not seem to have any issue rebuilding quickly once the server came back up; the layers of redundancy made it fairly painless. However, the load on System B went up to 2.0+ and on System A up to 7.0+!
This slowed down System B significantly and made System A completely unusable.
I took a look at the many different tools to try to debug this.
- top
- iostat -x 1
- iotop
- lsof
- atop
The dynamics of how we use the redundant SANs should be taken into account here.
We mount System B to an application server via NFS, and reads and writes are done to System B. This makes it odd that System A was having such a hard time keeping up; it only has to handle the DRBD and OCFS2 communication in order to keep synced. (System B is handling reads and writes, while System A only has to handle writes on the DRBD layer when changes are made; iotop showed this at between 5 and 40 K/s, which seemed minimal.)
Nothing pointed to any kind of direct indicator of what was causing the 7+ load on System A. The top two processes seemed to be drbd_r_r0 and o2hb-XXXXXX, which took up minimal amounts of read and write.
The command to run to see what is happening on disk is
#iotop -oa
This command shows only the processes that have used some amount of disk reads or writes (-o), and it shows them cumulatively (-a), so you can easily see what is using the I/O on the system. From this I figured out that a majority of the writes on the system were going to the system drive.
What I found from this is that the iotop tool does not show the activity occurring at the DRBD/OCFS2 level. On System B, where the NFS drive was connected, I was able to see the nfsd process writing multiple MB of information when I would write to the NFS drive (cat /dev/zero > tmpfile), but I would see only 100K or so written to DRBD on System B, and nothing on System A; however, I would be able to see the file on System A.
I looked at the CPU load on System A when running the huge write, and it increased by about 1 (from 7+ to 8+), so it was doing some work; iotop just did not monitor it.
So I looked to iostat to find out if it would allow me to see the writes to the actual devices in the MD array.
I ran
#iostat -x 5
This let me see what was being written to the devices. Here I could see that the disk utilization on System A and System B was similar (about 10% per drive in the MD array) and that the await time on System B was a bit higher than on System A. When I ran this test I caused the load to go up to about 7 on all servers (application server, System A, and System B); stopping the write made the load on the application server and on System B go back down.
While this did not give me the cause, it helped me see that disk writes on System A are trackable through iostat, and since no writes were occurring when I ran iostat -x 5, I have to assume that some other kind of overhead was causing the huge load. With nothing else I felt I could test, I just rebooted Server A.
Lo and behold, the load dropped; writing and deleting huge files was no longer an issue. The only thing I could think was that a large amount of traffic of some kind was being transferred back and forth to some “zombie” server. (I had attempted to restart OCFS2 and DRBD and the system wouldn’t allow that either, which seems to indicate a problem with some process being held open by a zombie process.)
In the end, this is the best description I can give of the problem. While it is not a real resolution, I publish it so that when this issue comes up in the future, we will be able to investigate three different possibilities in order to get closer to figuring out the true cause.
- Investigate the network traffic (using ntop for traffic, tcpdump for contents, and ethtool for total stats and possible errors)
- Disconnect/reconnect the DRBD and OCFS2 pair to stop the synchronization, and watch the load on each side to see if it is related to the issue.
- Attempt to start and stop the drbd and ocfs2 processes and debug any problems with that process. (watch the traffic or other errors related to those processes)
Resolving net::ERR_INCOMPLETE_CHUNKED_ENCODING in Chrome
We have a page with 10 categories displayed as accordions: when you click one, it opens and displays the inputs for that category, allowing them to be edited and saved. We had a button to Edit All of the accordions, which would call the onclick function for each of the categories/accordions and open them all at once. When they were clicked individually there was no issue, but when the Edit All button was clicked, the page would lock up, become unresponsive, and show a blank white space from half the page down. If you waited long enough, you would see the net::ERR_INCOMPLETE_CHUNKED_ENCODING error in the console.
When googling for solutions to this issue, many mentioned a newer feature in Chrome labeled “Prefetch resources to load pages more quickly.” Since requiring users to turn off this option was not a valid solution, I looked into setting a meta tag in the header to turn the option off. This did not resolve the issue. Some possible solutions also mentioned turning off a setting in certain antivirus programs, but this was also not valid since we could not require it of end users. I also discounted setting the Content-Length header, since at this point the headers had already been sent.
Another possible cause was that the server might not be sending the terminal 0-length chunk. I found one area within the code that had an ob_start() without a matching end. I tried adding several flavors of ob_end_*() and even ob_flush(), and this did not resolve the issue.
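For reference, this is roughly the shape of what we tried: closing every output buffer still open so PHP can terminate the response cleanly. A minimal sketch (and, to be clear, not the change that ultimately fixed our page):
// Close every unmatched ob_start() so the final 0-length chunk can be sent.
while (ob_get_level() > 0)
ob_end_flush(); // flush and close the innermost buffer
flush(); // push any remaining output to the client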
Next I looked into how many of the accordions could be clicked at the same time before the error occurred. It turned out that 9 could be clicked without the error, and no specific accordion was causing it. I also determined that the process of opening them started slowing down at around 6-7 at once. I wrote a loop calling a function via setTimeout that would set a groupsopening class on a subset of the accordions that needed to be opened and did not already have that class, and open just those; then I would bump up the setTimeout delay and run the function again, looping until all of the accordions were open. I ran into issues with this method where the accordions would all still try to open at the same time. I believe the issue was related to calling setTimeout repeatedly with the same function, but I’m not completely sure; I had that feeling based on some research into setTimeout.
Next I tried the setInterval method, setting up a function to be called every 200 ms. I added a global variable to track how many times the function had been called, and cleared the interval if it was over 50, because if it ran that many times it had to be in an infinite loop. Next I added a check for tables inside each of the accordions marked with the new groupsopening class (the tables are only added after an accordion is open) and returned if some still needed to finish opening. Then I added code to pull the accordions without the new class. If there were none left, we were done and I cleared the interval. If some accordions still lacked the class, I marked another subset with the new class and called the click function to open them. This resolved the issue and prevented the error.