Posts Tagged storwize V7000

Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume

If a Storwize V7000 or SAN Volume Controller FlashCopy mapping is started on Volumes greater than 2TB in size, this will result in the FlashCopy operation writing incorrect data to the target Volume. The target Volume will therefore not contain the same data as the source Volume.

Content

An issue has been discovered that will result in incorrect data being written to FlashCopy target Volumes greater than 2TB in size.

Data on the source Volume will be unaffected by this issue. Any FlashCopy replicated Volumes less than 2TB in size will also be unaffected by this issue.

Customers are strongly advised not to perform any FlashCopy operations on Volumes greater than 2TB in size, until a fix is available and has been applied.

Customers should also be aware that any previously created FlashCopy target Volumes greater than 2TB in size will have incorrect data on them, and should be treated as inconsistent with the original source data.
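To identify existing Volumes that are larger than 2TB, and therefore any FlashCopy mappings that should be reviewed, the cluster CLI can be queried over SSH. The following is only a minimal sketch, not an official IBM procedure: it assumes SSH access as the admin user to a cluster reachable as "cluster1" (both placeholders), that the lsvdisk and lsfcmap commands with the -bytes, -nohdr and -delim parameters are available, and that capacity is the eighth colon-delimited field of the concise lsvdisk view, as on 6.1.0 code.

    # Sketch: list Volumes larger than 2TB and show FlashCopy mappings that mention them.
    # "admin@cluster1" is a placeholder for your cluster's CLI access.
    TWO_TB=2199023255552                       # 2TB in bytes
    ssh admin@cluster1 "lsvdisk -bytes -nohdr -delim :" |
    while IFS=: read -r id name iogrp_id iogrp_name status mdiskgrp_id mdiskgrp_name capacity rest; do
        if [ "$capacity" -gt "$TWO_TB" ]; then
            echo "Volume $name (id $id) is $capacity bytes, i.e. larger than 2TB"
            # Crude match: list any FlashCopy mappings whose output mentions this Volume name.
            # (-n stops ssh from consuming the loop's stdin.)
            ssh -n admin@cluster1 "lsfcmap -nohdr -delim :" | grep ":$name:"
        fi
    done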

Fix

This issue will be fixed in the upcoming 6.1.0.9 PTF release, and also in the next major release, 6.2.0.x.

 

Cross Reference information

Segment Product Component Platform Version
Storage Virtualization SAN Volume Controller 6.1 SAN Volume Controller 6.1
Storage Virtualization SAN Volume Controller V5.1.x SAN Volume Controller 5.1
Storage Virtualization SAN Volume Controller V4.3.x SAN Volume Controller 4.3.0, 4.3.1

 


Source: https://www-304.ibm.com/support/docview.wss?mynp=OCST3FR7&mync=E&uid=ssg1S1003840&myns=s028

 


IBM disk systems - Shutting down a cluster node server in the administrative interface does not work

When you shut down a cluster node server in the administrative interface, the final status of the server is listed as “Running Maintenance Mode” instead of the expected status of “Stopped”.

Symptom

After you shut down the cluster node server in the administrative interface, the status “Shutting Down” is displayed for a while and then changes to “Running Maintenance Mode” instead of “Stopped”. You can still reach the cluster node server by using ping and SSH.

Cause

The administrative interface uses the power control adapter of the cluster node server to initiate a clean shutdown of the server. If the power control adapter is not working correctly, or if the operating system on the node hangs during shutdown, the server might stay in a running state with a part of the processes stopped. The administrative interface detects this situation as a state similar to “Running Maintenance Mode”.

 

Resolving the problem

Complete the following steps to resolve the problem; a scripted sketch of steps 1 and 2 follows the list:

  1. Force a shutdown of the cluster node server. This method works if the operating system is preventing the server from shutting down correctly. Complete the following steps:
    1. Log on to the management console server with the iaadmin ID and password.
    2. Run the following command, where ianodeX is the name of the cluster node server:
      ia_powercontrol -d ianodeX -N
  2. To check if the cluster node server is shut down, run the following command as iaadmin on the management console server, where ianodeX is the name of the server:
    ia_powercontrol -s ianodeX
    The server might take a few minutes to shut down. The command returns the following message after a successful shutdown:
    Node attached to power control hardware at 'ianodeX' is not powered on.
  3. If the ia_powercontrol command returns error messages, the power control adapter might not be working correctly. Complete the following steps to reset the power control adapter:
    1. Log on to the power control adapter, by running the following command, where ianodeX is the name of the cluster node server: telnet pwrctl-ianodeX
    2. At the login: prompt, enter the following ID in upper-case characters: USERID
    3. At the Password: prompt, enter the following password in upper-case characters, using a zero instead of an “O” character: PASSW0RD
    4. At the system> prompt, run the following command: resetsp
    5. Log off from the power control adapter, by running the following command: exit
    6. Wait for 10 minutes so that the power control adapter can finish booting.
    7. Run the ia_powercontrol command again to force a shutdown.
  4. If you cannot shut down the cluster node server by using the power control adapter, log on to the server as root and run the following command: shutdown -h now
    Note: If the cluster node server is running in ‘Enhanced Tamper Protection’ mode, you must request and install an Emergency Support Access (ESA) patch to set up root access to the server.
  5. If you cannot shut down the cluster node server by using any of the previous steps, press the server power button for several seconds to stop the server.
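The force-shutdown and status-check commands from steps 1 and 2 can be combined into a small script run as iaadmin on the management console server. This is only an illustrative sketch built from the commands above: "ianode1" is a placeholder node name, the polling interval and retry count are arbitrary choices, and the success test simply looks for the "is not powered on" message quoted in step 2.

    # Sketch: force a cluster node server to shut down, then wait for it to power off.
    NODE=ianode1                       # placeholder; use the real cluster node server name

    ia_powercontrol -d "$NODE" -N      # step 1: force the shutdown

    # step 2: poll the power state for up to ~10 minutes
    for attempt in 1 2 3 4 5 6 7 8 9 10; do
        if ia_powercontrol -s "$NODE" | grep -q "is not powered on"; then
            echo "$NODE has shut down."
            exit 0
        fi
        sleep 60                       # the server may take a few minutes to shut down
    done
    echo "$NODE still appears to be powered on; continue with steps 3 to 5." >&2
    exit 1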

https://www-304.ibm.com/support/docview.wss?mynp=OCSSS9C9&mync=E&uid=swg21499443&myns=s028

 


Storwize V7000 Systems Running V6.1.0.0 – V6.1.0.6 Code May Shut Down Unexpectedly

Storwize V7000 Systems Running V6.1.0.0 – V6.1.0.6 Code May Shut Down Unexpectedly During Normal Operation, Resulting in a Loss of Host Access and Potential Loss of Fast-Write Cache Data

 

Storwize V7000 units running code levels between V6.1.0.0 and V6.1.0.6 are exposed to an issue that can result in both node canisters shutting down simultaneously and unexpectedly during normal operation.

 

Content

An issue exists in the V6.1.0.0 – V6.1.0.6 code that can result in both node canisters abruptly shutting down during normal operation, resulting in a loss of hardened configuration metadata on these nodes and the requirement to perform a manual cluster recovery process to restore the configuration. This recovery process may take up to several hours to complete.

 

Additionally, any host I/O data that was resident in the fast-write cache at the time of failure will be unrecoverable.

 

 

Fix

A workaround was introduced in the V6.1.0.7 PTF release which, although it does not eliminate the underlying issue, prevents the shutdown event on one node canister from propagating to the other node canister. This is intended to prevent a double-node shutdown, and with it any loss of host access to data and the need to perform a cluster recovery process. This workaround was further improved in the V6.1.0.8 PTF release to ensure that affected nodes recover automatically.

 

If a single node shutdown event does occur when running V6.1.0.8, this node will automatically recover and resume normal operation without requiring any manual intervention.

 

IBM Development is continuing to work on a complete fix for this issue, to be released in a future PTF; in the meantime, customers should upgrade to V6.1.0.8 to avoid an outage.
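Before and after applying the PTF, the installed code level can be verified from the CLI. The following is a minimal sketch only: it assumes SSH access as the admin user to a clustered system named "cluster1" (both placeholders), and that the detailed lscluster view reports a code_level field, as it does on 6.1.x systems.

    # Sketch: display the installed code level (e.g. 6.1.0.8) of the clustered system.
    ssh admin@cluster1 "svcinfo lscluster cluster1" | grep code_level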

 

Please visit the following URL to download the V6.1.0.8 code:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000966

 

https://www-304.ibm.com/support/docview.wss?mynp=OCST3FR7&mync=E&uid=ssg1S1003805&myns=s028



Storwize V7000 new features released

Storwize V7000 was introduced in October 2010 into IBM's mid-range storage portfolio. The V7000 achieved one of the fastest product ramps in IBM history, with more than 1,800 systems sold to more than 1,000 customers worldwide since general availability in November 2010. The product now supports 10 GbE ports in new control enclosure models, which can increase iSCSI throughput by up to 700 per cent. A high-performance 2.5-inch 146 GB, 15,000 rpm SAS drive is available and provides up to 30 per cent faster throughput. The original drive choices were 2.5-inch 10,000 rpm SAS drives in 300, 450 and 600 GB capacities, plus a 300 GB E-MLC (enterprise-grade multi-level cell) SSD; a 3.5-inch 2 TB 7,200 rpm Near-Line SAS disk is also available. The two V7000 control enclosure models (2U rack-mountable chassis) each provide eight 8 Gbps FC host ports, four 1 Gbps iSCSI host ports and, optionally, four 10 Gbps iSCSI host ports, with 16 GB of cache memory per control enclosure.

 

With IBM Storwize V7000 software v6.2, two V7000 control enclosures can now be clustered together, doubling the capacity of a single managed V7000 system to up to 480 TB. The software has also gained built-in real-time performance monitoring, and the FlashCopy function can now be used with Remote Mirror volumes, adding more choices for high-availability scenarios. This is similar to high-end storage products such as the DS8000.

 

VMware vStorage APIs for Array Integration (VAAI) are now supported, meaning the array can take on storage work offloaded from the ESX server, enabling more VMs to be hosted and run. This is one of the most important features in the v6.2 release.

 

When a customer buys a new V7000, IBM offers the data migration feature free for 60 days, which is enough time for a company to migrate all of its data from another storage device to the V7000 on its own, and/or to place existing storage devices behind the V7000 as externally virtualized storage. Storwize V7000 can be upgraded from the smallest to the largest configuration without disruption. Existing V7000s can participate in clusters via a non-disruptive software upgrade to v6.2, and a cluster is managed as a single system. Once clustering is enabled, expansion enclosures can be added to scale capacity and/or a second control enclosure can be added to boost performance.

 

There is a Storwize V7000 plug-in for VMware vCenter, which also supports virtualized external disk systems. The list of supported external systems now includes EMC's VNX, HDS's VSP and HP's P9500, plus the Texas Memory Systems RamSan-620. Lastly, existing model 112 and 124 control enclosures can be upgraded to add 10 GbE support. iSCSI is becoming more and more important because of its agility and cost advantages.

 

As a reminder, I want to note that everybody usually wants 15K rpm HDDs, but 10K rpm HDDs can be the better choice if you have a chance to balance the performance impact. The V7000 can deliver better performance by combining 10K rpm HDDs with SSDs and/or 146 GB 15K rpm HDDs: with the right approach and sizing calculations, you can get better performance than with native 15K rpm HDDs alone.

 

IBM has not yet added compression to the V7000; it is expected at some future date. IBM said in September 2010 that, within 12-18 months, we would see RACE integration into IBM's block storage products, so we may have some time to wait.

 

Most of the new V7000 functionality will be available in June this year. There is no extra feature to order and no extra charge for clustering. The vCenter plug-in will be available at no charge on 30 June (for v6.1 software) and 31 July (for v6.2 software).

 

http://www-03.ibm.com/systems/storage/disk/storwize_v7000/index.html

 


Storwize V7000 Node Canisters May Shut Down or Reboot Unexpectedly During Normal Operation

Storwize V7000 node canisters may shut down or reboot during normal operation, leading to a loss of host I/O access.

Description:

Storwize V7000 node canisters running V6.1.0.0 – V6.1.0.4 code levels may shut down without warning during normal I/O operations.

These shutdown events will typically occur on both node canisters in the Storwize V7000 system, with the second node canister shutting down a number of hours after the first. Once the second node canister has shut down, hosts will lose access to the disks presented by the Storwize V7000 until at least one of the node canisters has been manually brought back online.

Workaround:

If this issue is encountered on V6.1.0.0 – V6.1.0.4, the recovery action is to reseat each offline node canister in order to bring it back online.

Partial Fix Introduced in V6.1.0.5:

A partial fix was introduced in V6.1.0.5, which causes node canisters that experience this condition to reboot and automatically resume I/O operations, rather than shut down and remain offline. Customers running V6.1.0.5 code are, however, still exposed to the risk of both node canisters rebooting at the same time, which could lead to a short, temporary outage to host I/O.

Complete Fix:

This issue has been fully resolved by APAR IC74088 in the V6.1.0.6 release. Please visit the following URL to download the latest V6.1.0.x code:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003748&myns=s028&mynp=OCST3FR7&mync=E


IBM Storwize V7000 6.1.0 Configuration Limits and Restrictions

Storwize V7000 software versions 6.1.0.0 to 6.1.0.6 support attachment of up to 4 expansion enclosures per system. Software version 6.1.0.7 and later removes this restriction, supporting attachment of up to 9 expansion enclosures, allowing a total of 10 enclosures per system.
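To check how many enclosures a system currently contains before adding more, the enclosure list can be counted from the CLI. A minimal sketch, assuming SSH access as the admin user to a system reachable as "cluster1" (both placeholders) and the lsenclosure command with the -nohdr parameter as provided by the 6.1.0 CLI.

    # Sketch: count the control and expansion enclosures currently in the system.
    ssh admin@cluster1 "lsenclosure -nohdr" | wc -l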

 

DS4000 Maintenance

Storwize V7000 supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Supported Hardware List when they are running 06.23.05.00 or later controller firmware. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades will need to first upgrade the DS4000 controller firmware to 06.23.05.00. This is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with Storwize V7000. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.

Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time, with a 10-minute delay between completing the upgrade of one enclosure and starting the upgrade of the next. Confirm via the Storage Manager application's “Recovery Guru” that the DS4000 is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.

 

Host Limitations

Windows SAN Boot Clusters (MSCS):

It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:

  • On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.

These restrictions and more are described in the Microsoft White Paper: “Microsoft Windows Clustering: Storage Area Networks”.

We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).

Oracle:

 

  • Restriction 1: ASM cannot recognise the size change of a disk when a Storwize V7000 disk is resized, unless the disk is removed from ASM and included again.
  • Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround to the OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring up the ASM instance again.
  • Restriction 3: For RHEL4, set the Oracle Clusterware ‘misscount’ parameter to a larger value to allow SDD to complete path failover first. The default misscount setting of 60s is too short for SDD; we recommend setting it to 90s or 120s, as shown in the sketch after this list. Command to use: crsctl set css misscount 90
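A brief sketch of checking and then raising the misscount value with crsctl, run as root on one of the cluster nodes. The set command is taken from Restriction 3 above; the use of crsctl get css misscount to read the current value is an assumption about the installed Clusterware version.

    # Sketch: inspect the current misscount, then raise it so SDD can complete path failover first.
    crsctl get css misscount        # expected to show the 60s default
    crsctl set css misscount 90     # 90s as recommended above; 120s is also acceptable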

 

Maximum Configurations

Configuration limits for Storwize V7000 software version 6.1.0:

 
