Category: Linux
Setting DRBD in Primary / Primary — common commands to sync resync and make changes
As we have been setting up our farm with an NFS share, the DRBD primary/primary connection between the servers is important.
We are setting up a group of /customcommands/ that we will be able to run to help us keep track of all of the common status and maintenance commands we use, but when we have to create, change the structure, sync and resync, recover, grow or move the servers, we need to document our 'Best Practices' and how we can recover.
From a base server install:
apt-get install gcc make flex
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.1.tar.gz
tar xvfz drbd-8.4.1.tar.gz
cd drbd-8.4.1/
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km
make KDIR=/lib/modules/3.2.0-58-virtual/build
make install
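Before going further, it is worth a quick sanity check that the module and the userland tools line up. This is just a sketch of what I would run; the exact version strings will match whatever you built:
modprobe drbd    #load the freshly built kernel module
modinfo drbd | grep ^version    #should report 8.4.1
drbdadm --version    #confirms the userland tools are installed and on the PATH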
Set up /etc/drbd.d/disk.res:
resource r0 {
    protocol C;
    syncer { rate 1000M; }
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    net {
        # requires a clustered filesystem such as OCFS2 for two primaries mounted simultaneously
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        cram-hmac-alg sha1;
        shared-secret "sharedsanconfigsecret";
    }
    on server1 {
        device /dev/drbd0;
        disk /dev/xvdb;
        address 192.168.100.10:7788;
        meta-disk internal;
    }
    on server2 {
        device /dev/drbd0;
        disk /dev/xvdb;
        address 192.168.100.11:7788;
        meta-disk internal;
    }
}
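Before bringing anything up, a quick syntax check of the resource file on each node helps. drbdadm will parse the file and print it back if it is valid (a sketch; run on both servers):
drbdadm dump r0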
Set up your /etc/hosts:
192.168.100.10 server1
192.168.100.11 server2
Set /etc/hostname to
server1
Reboot, verify your settings, and SAVE A DRBDVMTEMPLATE: clone your VM to a new server called server2.
Set /etc/hostname to
server2
Start DRBD with /etc/init.d/drbd start. This will likely try to create the connection, but this is where we are going to 'play' to learn the commands and how we can sync, etc.
cat /proc/drbd    #shows the status of the connections
server1> drbdadm down r0    #turns off the drbd resource and connection
server2> drbdadm down r0    #turns off the drbd resource and connection
server1> drbdadm -- --force create-md r0    #creates a new set of metadata on the drive, which erases drbd's memory of the sync status in the past
server2> drbdadm -- --force create-md r0    #creates a new set of metadata on the drive, which erases drbd's memory of the sync status in the past
server1> drbdadm up r0    #turns on the drbd resource and connection; they should connect without a problem, with no memory of a past sync history
server2> drbdadm up r0    #turns on the drbd resource and connection; they should connect without a problem, with no memory of a past sync history
server1> drbdadm -- --clear-bitmap new-current-uuid r0    #this creates a new 'disk sync image', essentially telling drbd that the servers are blank so no sync needs to be done; both servers are immediately UpToDate/UpToDate in /proc/drbd
server1> drbdadm primary r0
server2> drbdadm primary r0    #make both servers primary; now when you put a filesystem on /dev/drbd0 you will be able to read and write on both systems as though they are local
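The clustered filesystem itself is outside the scope of this post, but for completeness here is a rough sketch of the OCFS2 step, assuming the o2cb cluster stack is already configured on both nodes (without it, DRBD will happily run dual-primary but a simultaneous mount is not safe):
server1> mkfs.ocfs2 -L drbd0 /dev/drbd0    #create the clustered filesystem on ONE node only
server1> mkdir -p /mnt/drbd0 && mount -t ocfs2 /dev/drbd0 /mnt/drbd0
server2> mkdir -p /mnt/drbd0 && mount -t ocfs2 /dev/drbd0 /mnt/drbd0    #mount on both nodes once both are primary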
So, let's do some failure scenarios. Say we lose a server; it doesn't matter which one since they are both primaries, but in this case we will say server2 failed. Create a new VM from DRBDVMTEMPLATE, which already has DRBD built on it with the configuration, or create another one using the instructions above.
Open /etc/hostname and set it to
server2
Reboot. Make sure DRBD is running (start it with /etc/init.d/drbd start if it is not).
server1> watch cat /proc/drbd    #watch the status of drbd; it is very useful and telling about what is happening. You will want DRBD to be Connected Primary/Unknown UpToDate/DUnknown
server2> drbdadm down r0
server2> drbdadm wipe-md r0    #this is an optional step that wipes out the metadata. I have not seen that it does anything different from creating the metadata with the command below, but it is useful to know the command in case you want to get rid of md on your disk
server2> drbdadm -- --force create-md r0    #this makes sure that there is no partial resync data left over from the VM you cloned it from
server2> drbdadm up r0    #this brings drbd on server2 back into the resource and connects them; it will immediately start syncing. You should see SyncSource Primary/Secondary UpToDate/Inconsistent on server1. For me it was going to take 22 hours for my test of a 1TB disk (10 MB/second)
Let's get funky: what happens if you stop everything in the middle of a sync?
server1> drbdadm down r0    #we shut down the drbd resource that has the most up-to-date information. On server2, /proc/drbd shows Secondary/Unknown Inconsistent/DUnknown; server2 does not know about server1 any more, but server2 still knows that server2 is inconsistent. (An insertable step here could be, on server2: drbdadm down r0; drbdadm up r0, with no change to the effect.)
server1> drbdadm up r0    #this brings server1 back online. /proc/drbd on server1 shows SyncSource and server2 shows SyncTarget; server1 came back up as the UpToDate server, server2 was Inconsistent, and drbd figured it out
Where things started to go wrong and become less 'syncable' was when both servers were down and had to be brought back up again separately, with a new UUID created on each of them separately. So let's simulate that the DRBD config fell apart and we have to put it together again.
server2> drbdadm disconnect r0; drbdadm -- --force create-md r0; drbdadm connect r0    #start the sync process over
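If the two copies have truly diverged (DRBD logs a split-brain and refuses to connect), one side's changes have to be thrown away. I have not needed this yet on these servers, but the usual 8.4 recovery is a sketch like the following, assuming server1 holds the data you want to keep:
server2> drbdadm secondary r0
server2> drbdadm connect --discard-my-data r0    #abandon server2's changes and resync from the peer
server1> drbdadm connect r0    #only needed if server1 also dropped the connection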
awk Command to remove Non IP entries from /etc/hosts and /etc/hosts.deny
We had a script automatically adding malicious IPs to our /etc/hosts.deny file on one of our servers.
The script went awry and ended up putting hundreds of thousands of non-IP entries into the file, with malicious IP addresses mixed in.
I used this awk script to clean it up, remove all of the non-IP entries, and make the list unique.
awk '/ALL/ && $NF ~ /[0-9.]/' /etc/hosts.deny | sort -n -k2 | uniq > /etc/hosts.deny2
Once I inspected /etc/hosts.deny2, I replaced the original:
mv /etc/hosts.deny2 /etc/hosts.deny
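If you want a stricter filter that only keeps lines whose last field looks like a complete IPv4 address (the pattern above also keeps any field that merely contains a digit or a dot), a variation along these lines should work; treat it as an untested sketch:
awk '/ALL/ && $NF ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/' /etc/hosts.deny | sort -n -k2 | uniq > /etc/hosts.deny2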
Mdadm – Failed disk recovery (unreadable disk)
Well,
After 9 more months I ran into another disk failure. (The first disk failure is written up here: https://www.matraex.com/mdadm-failed-disk-recovery/)
But this time, the system was unable to read the disk at all.
#fdisk /dev/sdb
This process just hung for a few minutes. It seemed I couldn't simply run a few commands like before to remove and re-add the disk to the software RAID, so I had to replace the disk. Before I went to the datacenter I ran
#mdadm /dev/md0 --remove /dev/sdb1
I physically went to our data center and found the disk that showed the failure (it was disk sdb, so I 'assumed' it was the center disk out of three, but I was able to verify since it was not blinking from normal disk activity). I removed the disk, swapped it out for one that I had sitting there waiting for this to happen, and replaced it. Then I ran a command to make sure the disk was correctly partitioned to be able to fit into the array.
#fdisk /dev/sdb
This command did not hang, but responded with 'cannot read disk'. Darn, it looks like some error happened within the OS or on the backplane that made the newly added disk unreadable. I scheduled a restart on the server, and later, when the server came back up, fdisk could read the disk. It looks like I had used the disk for something before, but since I had put it in my spare disk pile I knew I could delete it, and I partitioned it with one partition to match what the md was expecting (same as the old disk).
#fdisk /dev/sdb
>d 2 - deletes the old partition 2
>d 1 - deletes the old partition 1
>n - creates a new partition
>p - sets the new partition as primary
>1 - sets the new partition as number 1
><ENTER> - just press enter to accept the default starting cylinder
><ENTER> - just press enter to accept the default ending cylinder
>w - writes the partition changes to disk
>Ctrl+C - break out of fdisk (it normally exits on its own after w)
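As an alternative to stepping through fdisk by hand, copying the partition table from the healthy disk with sfdisk should give an identical layout (this assumes sda and sdb are the same size; double check before overwriting):
#sfdisk -d /dev/sda | sfdisk /dev/sdb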
Now the partition is ready to add back to the raid array
#mdadm /dev/md0 --add /dev/sdb1
And we can immediately see the progress
#mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Jul 18 00:57:18 2007
     Raid Level : raid5
     Array Size : 140632704 (134.12 GiB 144.01 GB)
    Device Size : 70316352 (67.06 GiB 72.00 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Sat Feb 22 10:32:01 2014
          State : active, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 0% complete
           UUID : fe510f45:66fd464d:3035a68b:f79f8e5b
         Events : 0.537869

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       3       8       17        1      spare rebuilding   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
And then to see the progress of rebuilding
#cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[3] sda1[0] sdc1[2]
      140632704 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [==============>......]  recovery = 71.1% (50047872/70316352) finish=11.0min speed=30549K/sec
md1 : active raid1 sda2[0]
      1365440 blocks [2/1] [U_]
Wow, in the time I have been blogging this it is already 71 percent rebuilt! But wait, what is this: md1 has failed? I checked my monitor and what did I find but another message showing that md1 failed with the reboot. I was so used to getting the notice saying md0 was down that I did not notice md1 did not come back up with the reboot! How can this be?
It turned out that sdb was in use on both md1 and md0, but even though /dev/sdb could not be read at all and /dev/sdb1 failed out of the md0 array, somehow the RAID subsystem had not noticed and degraded the md1 array, even though the entire sdb disk was not responding (perhaps sdb2 WAS responding back then, just not sdb; who knows at this point). Maybe the errors on the old disk could have been corrected by the reboot if I had tried that before replacing the disk, but that doesn't matter any more. All I know is that I have to repartition the sdb device in order to support both the md0 and md1 arrays.
I had to wait until sdb finished rebuilding, then remove it from md0, use fdisk to destroy the partitions, build new partitions matching sda, and add the disk back to md0 and md1.
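For reference, a rough sketch of that sequence once the rebuild finished (same device names as above; check /proc/mdstat before and after each step):
#mdadm /dev/md0 --fail /dev/sdb1    #mark the member failed so it can be removed
#mdadm /dev/md0 --remove /dev/sdb1
#sfdisk -d /dev/sda | sfdisk /dev/sdb    #copy sda's partition layout onto sdb
#mdadm /dev/md0 --add /dev/sdb1
#mdadm /dev/md1 --add /dev/sdb2    #gives md1 its second mirror half back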
XenCenter – live migrating a vm in a pool to another host
When migrating a vm server from one host to another host in the pool I found it to be very easy at first.
In fact, it was one of the first tests I did after setting up my first VM on a host in a pool. Four steps:
- Simply right click on the vm in XenCenter ->
- Migrate to Server ->
- Select from your available servers.
- Follow the wizard
In building some servers, I wanted to get some base templates which are 'aware' of the network I am putting together. This would involve adding some packages and configuration, taking a snapshot, and then turning that snapshot into a template that I could easily restart next time I wanted a similar server. Then, when I went to migrate one of the servers to its final resting place, I found an interesting error.
- Right click on the vm in XenCenter ->
- Migrate to Server ->
- All servers listed – Cannot see required storage
I found this odd, since I was sure that the pool could see all of the required storage (in fact I was able to start a new VM on the available storage, so I knew the storage was there).
I soon found out, though, that the issue is that the live migrate feature just doesn't work when there is more than one snapshot. I will have to rethink how I want to manage snapshots now, but basically I found that by removing old snapshots so that the VM only had one snapshot left (I kept one that was a couple of days old), I was able to follow the original 4 steps.
Note: the way I found out about the limitation of the number of snapshots was by
- Right click on the vm in XenCenter ->
- Migrate to Server ->
- The available servers are all grayed out, so select “Migrate VM Wizard”
- In the wizard that comes up select the current pool for “Destination”
- This populates a list of Home Servers in the destination pool to which you can migrate the VM (my understanding is that this will move the VM to that server AND make that new server the “Home Server” for that VM)
- When you attempt to select from the drop down list under Home Server, you see a message “You attempted to migrate a VM with more than one snapshot”
Using that information I removed all but one snapshot and was able to migrate. I am sure there is some logical reason behind the snapshot/migration limitation, but for now I will work around it and come up with some other way to handle my snapshots than just leaving them under the snapshot tab of the server.
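For what it is worth, the same cleanup can be done from the xe command line instead of clicking through XenCenter. The uuids and names below are placeholders and this is from memory of the xe CLI, so treat it as a sketch rather than a recipe:
#xe vm-list name-label=<vm name> params=uuid
#xe snapshot-list snapshot-of=<vm uuid> params=uuid,name-label
#xe snapshot-uninstall uuid=<snapshot uuid> force=true    #removes the snapshot and its disks
#xe vm-migrate vm=<vm name> host=<destination host>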
apt-get – NO_PUBKEY – how to add the pubkey
I have run into this situation many times on Ubuntu and Debian so I thought I would finally document the fix.
When you run into an apt-get error where there is no public key available for a package you want to install, you get this error:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY xxxxxxxxxxxxxxxxxxxxxx
This means your system does not trust the signature, so if you trust MIT's keyserver, you can do this to fix it:
root@servername:~# gpg --keyserver pgp.mit.edu --recv-keys xxxxxxxxxxxxxxxxxxxxxx
root@servername:~# gpg --armor --export xxxxxxxxxxxxxxxxxxxxxx | apt-key add -
This has solved it for me every time so far. At some point I might run into a situation where MIT does not have the keys, but for now this works, and I trust them.
Entire script below
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY xxxxxxxxxxxxxxxxxxxxxx
root@servername:~# gpg --keyserver pgp.mit.edu --recv-keys xxxxxxxxxxxxxxxxxxxxxx
root@servername:~# gpg --armor --export xxxxxxxxxxxxxxxxxxxxxx | apt-key add -
OK
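If your version of apt-key supports the adv pass-through to gpg (recent Debian and Ubuntu releases do), the same thing can be done in one step:
root@servername:~# apt-key adv --keyserver pgp.mit.edu --recv-keys xxxxxxxxxxxxxxxxxxxxxx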
MDADM – Failed disk recovery (too many disk errors)
This only happens once every couple of years, but occasionally a SCSI disk on one of our servers has too many errors, and is kicked out of the md array
And… we have to rebuild it. Perhaps we should replace it since it appears to be having problems, but really, the I in RAID is inexpensive (or something), so I would rather lean toward being frugal with the disks and replace them only if required.
I can never remember off the top of my head the commands to recover, so this time I am going to blog it so I can easily find it.
First step, take a look at the status of the arrays on the disk
#cat /proc/mdstat
(I don't have a copy of what the failed drive looks like since I didn't start blogging until after)
Sometimes an infrequent disk error can cause md to fail a hard drive and remove it from an array, even though the disk is fine.
That is what happened in this case, and I knew the disk was at least partially good. The disk/partition that failed was /dev/sdb1 and was part of a RAID 5; on that same device, another partition is part of a RAID 1, and that RAID 1 is still healthy, so I knew the disk was basically fine. So I am only re-adding the disk to the array so it can rebuild. If the disk has a second problem in the next few months, I will go ahead and replace it, since the issue that happened tonight is probably indicating a disk that is beginning to fail but probably still has lots of life in it.
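If you want a little more evidence before trusting the disk again, a SMART check is a quick sanity test. This assumes smartmontools is installed, which is not the case by default on every server:
#smartctl -H /dev/sdb    #overall health self-assessment
#smartctl -a /dev/sdb    #full attribute and error log dump; growing defect or error counters are a bad sign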
The simple process is
#mdadm /dev/md0 --remove /dev/sdb1
This removes the faulty disk. That is when you would physically replace the disk in the machine; since I am only going to rebuild the same disk, I skip that and move to the next step.
#mdadm /dev/md0 --re-add /dev/sdb1
The disk started to reload and VOILA! we are rebuilding and will be back online in a few minutes.
Now you take a look at the status of the arrays
#cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[3] sdc1[2] sda1[0]
      140632704 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [=======>.............]  recovery = 35.2% (24758528/70316352) finish=26.1min speed=29020K/sec
md1 : active raid1 sda2[0] sdb2[1]
      1365440 blocks [2/2] [UU]
In case you want to do any troubleshooting on what happened, this command is useful for looking into the logs.
#grep mdadm /var/log/syslog -A10 -B10
But this command is the one that I use to see the important events related to the failure and rebuild. As I am typing this, the rebuild is just over 60% complete, which you can see in the log:
#grep mdadm /var/log/syslog
Jun 15 21:02:02 xxxxxx mdadm: Fail event detected on md device /dev/md0, component device /dev/sdb1
Jun 15 22:03:16 xxxxxx mdadm: RebuildStarted event detected on md device /dev/md0
Jun 15 22:11:16 xxxxxx mdadm: Rebuild20 event detected on md device /dev/md0
Jun 15 22:19:16 xxxxxx mdadm: Rebuild40 event detected on md device /dev/md0
Jun 15 22:27:16 xxxxxx mdadm: Rebuild60 event detected on md device /dev/md0
You can see from the times that it took me just over an hour to respond and start the rebuild. (I know, that seems too long if I were just doing this remotely, but when I got the notice I went on site since I thought I would have to do a physical swap; I had to wait a bit while the colo security verified my ID, and I was probably moving a little slow after some nachos at Jalepeno's.) Once the rebuild started, it took about 10 minutes per 20% of the disk.
————————-
Update: 9 months later the disk finally gave out and I had to manually replace the disk. I blogged again:
https://www.matraex.com/mdadm-failed-d…nreadable-disk/
Linux System Discovery
Over the last couple of weeks I have been working on doing some in depth “System Discovery” work for a client.
The client came to us after a major employee restructuring, during which they lost ALL of the technical knowledge of their network.
The potentially devastating business move on their part turned into a very intriguing challenge for me.
They asked me to come in and document what services each of their 3 Linux servers provides.
As I dug in I found that their network had some very unique, intelligent solutions:
- A reliable production network
- Thin Client Linux printing stations, remotely connected via VPN
- Several Object Oriented PHP based web applications
Several open source products had been combined to create robust solutions
It has been a very rewarding experience to document the systems and give ownership of the systems, network and processes back to the owner.
The documentation I have provided included
- A high level network diagram as a quick reference overview for new administrators and developers
- An overall application and major network, server and node object description
- Detailed per-server/node descriptions with connection documentation, critical processes, important paths and files, and dependencies
- Contact Information for the people and companies that the systems rely on.
As a business owner myself, I have tried to help the client recognize that even when they use an outside consultant, it is VERY important that they maintain details of their critical business processes INSIDE of their company. There might not be anything in business that is as rewarding as giving ownership of a "lost" system back to a client.
Matraex Upgraded Mail Client From Squirrelmail to Roundcube
Matraex has officially upgraded our web based mail client from Squirrelmail to Roundcube.
Roundcube is a modern mail client utilizing newer technologies for faster and more feature rich mail interaction. Roundcube runs on our Linux webservers, utilizing Apache, PHP and MySQL. The software connects to the mail server using the IMAP protocol.
All address book contacts and preferences were imported to Roundcube from Squirrelmail at the time of the transition.
As well as updating and implementing their own technologies, Matraex provides server administration, open source production implementation and software customization to businesses as a service.
Users with questions about the new mail service or Matraex Consulting Services should contact:
Michael Blood
Matraex, Inc
208.344.1115
www.matraex.com
Network Boot Server with Linux Install, Debian Etch and Lenny, CentOS and KNOPPIX
I just LOVE my dedicated PXE boot server at the office with several flavors of Linux installs on it.
I can bring a new server online with a base install in as few as five minutes with Debian or CentOS.
I can debug workstations and servers with a quick-booting KNOPPIX install.
I even have some kernel installations customized to install network drivers for the Dell 2650 so that the installs I do for those are quick and simple (basically the Broadcom network drivers and the openssh-server package are preseeded to be installed with the default install).
Here are the contents of my pxelinux.cfg/default file:
DISPLAY boot.txt
#DEFAULT etch_i386_install

LABEL etch_i386_install
kernel debian/etch/i386/linux
append vga=normal initrd=debian/etch/i386/initrd.gz --

LABEL etch_i386_expert
kernel debian/etch/i386/linux
append priority=low vga=normal initrd=debian/etch/i386/initrd.gz --

LABEL etch_i386_rescue
kernel debian/etch/i386/linux
append vga=normal initrd=debian/etch/i386/initrd.gz rescue/enable=true --

LABEL knoppix
kernel knoppix/vmlinuz
append secure myconfig=scan nfsdir=192.168.0.1:/srv/diskless/knoppix nodhcp lang=us ramdisk_size=100000 init=/etc/init apm=power-off nomce vga=791 initrd=knoppix/miniroot.gz quiet BOOT_IMAGE=knoppix

LABEL centos5_install
kernel centos/5/vmlinuz
append ks=nfs:192.168.0.1:/srv/diskless/centos/5/ks_prompt.cfg initrd=centos/5/initrd.img ramdisk_size=100000 ksdevice=eth0 ip=dhcp url --url http://mirror.centos.org/centos/5/os/i386/CentOS/

LABEL centos5_raid_install_noprompt
kernel centos/5/vmlinuz
append ks=nfs:192.168.0.1:/srv/diskless/centos/5/ks_raid.cfg initrd=centos/5/initrd.img ramdisk_size=100000 ksdevice=eth0 ip=dhcp url --url http://mirror.centos.org/centos/5/os/i386/CentOS/

LABEL centos5_hda_install_noprompt
kernel centos/5/vmlinuz
append ks=nfs:192.168.0.1:/srv/diskless/centos/5/ks_hda.cfg initrd=centos/5/initrd.img ramdisk_size=100000 ksdevice=eth0 ip=dhcp url --url http://mirror.centos.org/centos/5/os/i386/CentOS/

LABEL centos5_install_noprompt
kernel centos/5/vmlinuz
append ks=nfs:192.168.0.1:/srv/diskless/centos/5/ks.cfg initrd=centos/5/initrd.img ramdisk_size=100000 ksdevice=eth0 ip=dhcp url --url http://mirror.centos.org/centos/5/os/i386/CentOS/

LABEL lenny_i386_install
kernel debian/lenny/i386/linux
append vga=normal initrd=debian/lenny/i386/initrd.gz --

LABEL lenny_amd64_install
kernel debian/lenny/amd64/linux
append vga=normal initrd=debian/lenny/amd64/initrd.gz --

LABEL etch_amd64_install
kernel debian/etch/amd64/linux
append vga=normal initrd=debian/etch/amd64/initrd.gz --

LABEL etch_amd64_linux
kernel debian/etch/amd64/linux
append vga=normal initrd=debian/etch/amd64/initrd.gz --

LABEL etch_amd64_expert
kernel debian/etch/amd64/linux
append priority=low vga=normal initrd=debian/etch/amd64/initrd.gz --

LABEL etch_amd64_rescue
kernel debian/etch/amd64/linux
append vga=normal initrd=debian/etch/amd64/initrd.gz rescue/enable=true --

LABEL etch_amd64_auto
kernel debian/etch/amd64/linux
append auto=true priority=critical vga=normal initrd=debian/etch/amd64/initrd.gz --

PROMPT 1
Here are the contents of my boot.txt file (so that I know what to type at the command line when booting):
- Boot Menu -
=============
etch_i386_install    -  Debian Stable
etch_i386_expert     -  Debian Stable (Shows install menu every step)
etch_i386_rescue     -  Debian Stable Rescue
lenny_i386_install   -  has Broadcom net card customization
lenny_amd64_install  -  has Broadcom net card customization
etch_amd64_install
etch_amd64_linux
etch_amd64_expert
etch_amd64_rescue
etch_amd64_auto
centos5_install               -  CentOS 5 (Will prompt for disks)
centos5_install_noprompt      -  CentOS 5 (Will auto install without prompts)
centos5_hda_install_noprompt  -  CentOS 5 (Will auto install without prompts)
centos5_raid_install_noprompt -  CentOS 5 (Will auto install on raid 1 without prompts)
knoppix
Hope someone out there can find some use from this.
We of course can help people having trouble with their own TFTP and PXE boot server.
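For anyone building one of these from scratch, the DHCP/TFTP side can be as small as a few dnsmasq options. This is only a sketch: the paths and the 192.168.0.x addressing are assumptions carried over from the append lines above, and your PXE server may well use a different DHCP/TFTP daemon.
# /etc/dnsmasq.conf (PXE-relevant lines only)
enable-tftp
tftp-root=/srv/tftp    #directory holding pxelinux.0, pxelinux.cfg/ and the kernel/initrd trees
dhcp-range=192.168.0.100,192.168.0.200,12h
dhcp-boot=pxelinux.0    #boot filename handed to PXE clients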
Installed PERC management software afaapps and created simple mirror
I just installed Debian Lenny on a Dell 2650 with an OLD PERC 3 RAID controller.
I then installed the afaapps package from Dell’s website (http://support.us.dell.com/support/downloads/download.aspx?c=us&l=en&s=gen&releaseid=R85529&formatcnt=1&libid=0&fileid=112003)
Use this link or just search for ‘afaapps’ under the Drivers and Downloads section of the Dell support site.
After extracting the rpm from the downloaded file, I ran alien against it to turn it into a Debian package.
#apt-get install alien
#alien -d --scripts afaapps-2.8-0.i386.rpm
Now just install the created Debian package
#dpkg -i afaapps_2.8-1_i386.deb
Now that you have installed afacli, you can run it at the command line, which will open the PERC command prompt "FASTCMD>".
Then you’ll open / connect to the RAID controller using “open afa0”
#afacli
FASTCMD> open afa0
Executing: open “afa0”
A simple 'disk list' command finds out what your disk situation looks like:
AFA0> disk list
Executing: disk list
B:ID:L  Device Type  Blocks     Bytes/Block  Usage        Shared
------  -----------  ---------  -----------  -----------  ------
0:00:0  Disk         35566478   512          Initialized  NO
0:01:0  Disk         287132440  512          Initialized  NO
0:02:0  Disk         287132440  512          Initialized  NO
You may have to initialize your disks by typing 'disk initialize 1' and 'disk initialize 2' to make sure that the container can access them; you can see in my example above that my disks are already initialized.
Now I will create a volume on disk 1 and mirror that disk to disk 2
AFA0> container create volume 1
AFA0> container create mirror 1 2
At the bottom of your screen you should see the status of the mirroring job, something like:
Stat:OK!, Task:100, Func:MSC Ctr:1, State:RUN 16.2%
Once the job completes you can partition and format the disk. Check the label on the disk by running:
AFA0> container list
Executing: container list
Num          Total  Oth  Chunk          Scsi    Partition
Label Type   Size   Ctr  Size   Usage   B:ID:L  Offset:Size
----- ------ ------ ---- ------ ------- ------  -------------
0 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx1 Mirror  136GB   Valid   0:01:0  64.0KB: 136GB
/dev/sdb                                                 0:04:0  64.0KB: 136GB
From this I can see that I will need to partition and format disk “/dev/sdb”
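The partition and format step itself is ordinary at that point. A minimal sketch on Lenny might look like this (the mount point and filesystem choice are just examples):
#fdisk /dev/sdb    #create a single primary partition spanning the disk
#mkfs.ext3 /dev/sdb1
#mkdir -p /mnt/mirror
#mount /dev/sdb1 /mnt/mirror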
Have fun! And if I can help you on it let me know.