Posts Tagged storage virtualization

Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume

If a Storwize V7000 or SAN Volume Controller FlashCopy mapping is started on a Volume greater than 2TB in size, the FlashCopy operation will write incorrect data to the target Volume. The target Volume will therefore not contain the same data as the source Volume.

Content

An issue has been discovered that will result in incorrect data being written to FlashCopy target Volumes greater than 2TB in size.

Data on the source Volume will be unaffected by this issue. Any FlashCopy replicated Volumes less than 2TB in size will also be unaffected by this issue.

Customers are strongly advised not to perform any FlashCopy operations on Volumes greater than 2TB in size, until a fix is available and has been applied.

Customers should also be aware that any previously created FlashCopy target Volumes greater than 2TB in size will have incorrect data on them, and should be treated as inconsistent with the original source data.
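
To check whether any existing Volumes fall into the affected size range before starting new mappings, the capacities reported by the cluster CLI can be inspected. Below is a minimal sketch, assuming the output of svcinfo lsvdisk (with byte capacities and a colon delimiter) has been saved to a local text file; the file name, the column names and the 2 TiB threshold are assumptions, so verify them against your own cluster's output.

# Minimal sketch: flag Volumes larger than 2TB from saved CLI output.
# Assumptions: "svcinfo lsvdisk -bytes -delim :" output was saved to lsvdisk.txt and
# includes "name" and "capacity" columns; verify against your own cluster's output.

TWO_TB = 2 * 1024 ** 4  # treating "2TB" as 2 TiB in bytes; adjust if your limit differs

def volumes_over_2tb(path="lsvdisk.txt", delim=":"):
    oversized = []
    with open(path) as f:
        header = f.readline().strip().split(delim)
        name_idx = header.index("name")
        cap_idx = header.index("capacity")
        for line in f:
            fields = line.strip().split(delim)
            if len(fields) <= max(name_idx, cap_idx):
                continue  # skip blank or malformed lines
            if int(fields[cap_idx]) > TWO_TB:
                oversized.append(fields[name_idx])
    return oversized

if __name__ == "__main__":
    for name in volumes_over_2tb():
        print(f"Volume {name} is larger than 2TB - avoid FlashCopy operations on it")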

Fix

This issue will be fixed in the upcoming 6.1.0.9 PTF release, and also in the next major release, 6.2.0.x.

 

Cross Reference information

Segment | Product | Component | Platform | Version
Storage Virtualization | SAN Volume Controller | 6.1 | SAN Volume Controller | 6.1
Storage Virtualization | SAN Volume Controller | V5.1.x | SAN Volume Controller | 5.1
Storage Virtualization | SAN Volume Controller | V4.3.x | SAN Volume Controller | 4.3.0, 4.3.1

 


Source: https://www-304.ibm.com/support/docview.wss?mynp=OCST3FR7&mync=E&uid=ssg1S1003840&myns=s028

 


IBM XIV Host Attachment Kit v1.6 for AIX, SLES, HP-UX, RHEL, Windows and Solaris released

IBM XIV Host Attachment Kit for AIX, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for AIX is a software pack that simplifies the task of connecting an IBM AIX host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native AIX multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for SLES, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for SLES is a software pack that simplifies the task of connecting an SLES host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native SLES multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for HP-UX, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for HP-UX is a software pack that simplifies the task of connecting an HP-UX host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native HP-UX multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for RHEL, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for RHEL is a software pack that simplifies the task of connecting a RHEL host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native RHEL multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for Windows, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for Windows is a software pack that simplifies the task of connecting a Microsoft Windows Server host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native Windows Server multipath configuration. To further help the administrator, the HAK also installs required Windows Server hotfixes, and checks the compatibility of the installed HBA drivers as it was known when this HAK was released. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for Solaris, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for Solaris is a software pack that simplifies the task of connecting a Solaris host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native Solaris multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.
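
As a simple illustration of how the HAK's CLI tools fit into a host-side workflow, the sketch below just runs the kit's xiv_devlist utility and prints its output to confirm that XIV-based volumes are visible after attachment. This is a hedged example: it assumes the HAK utilities are installed and on the PATH, and that the interactive xiv_attach wizard has already been run by hand.

# Minimal sketch: confirm XIV volumes are visible on this host via the HAK's xiv_devlist.
# Assumption: the HAK CLI utilities are installed and on the PATH.
import subprocess

def list_xiv_devices():
    # xiv_devlist prints a table of XIV-based volumes mapped to this host
    result = subprocess.run(["xiv_devlist"], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"xiv_devlist failed: {result.stderr.strip()}")
    return result.stdout

if __name__ == "__main__":
    print(list_xiv_devices())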

 


IBM disk systems - Shutting down a cluster node server in the administrative interface does not work

When you shut down a cluster node server in the administrative interface, the final status of the server is listed as “Running Maintenance Mode” instead of the expected status of “Stopped”.

Symptom

After you shut down the cluster node server in the administrative interface, the status “Shutting Down” is displayed for a while and then changes to “Running Maintenance Mode” instead of “Stopped”. You can still reach the cluster node server by using ping and SSH.

Cause

The administrative interface uses the power control adapter of the cluster node server to initiate a clean shutdown of the server. If the power control adapter is not working correctly, or if the operating system on the node hangs during shutdown, the server might stay in a running state with a part of the processes stopped. The administrative interface detects this situation as a state similar to “Running Maintenance Mode”.

 

Resolving the problem

Complete the following steps to resolve the problem (steps 1 and 2 can also be scripted; see the sketch after the list):

  1. Force a shutdown of the cluster node server. This method works if the operating system is preventing the server from shutting down correctly. Complete the following steps:
    1. Log on to the management console server with the iaadmin ID and password.
    2. Run the following command, where ianodeX is the name of the cluster node server:
      ia_powercontrol -d ianodeX -N
  2. To check if the cluster node server is shut down, run the following command as iaadmin on the management console server, where ianodeX is the name of the server:
    ia_powercontrol -s ianodeX
    The server might take a few minutes to shut down. The command returns the following message after a successful shutdown:
    Node attached to power control hardware at 'ianodeX' is not powered on.
  3. If the ia_powercontrol command returns error messages, the power control adapter might not be working correctly. Complete the following steps to reset the power control adapter:
    1. Log on to the power control adapter by running the following command, where ianodeX is the name of the cluster node server: telnet pwrctl-ianodeX
    2. At the login: prompt, enter the following ID in upper-case characters: USERID
    3. At the Password: prompt, enter the following password in upper-case characters, using a zero instead of an “O” character: PASSW0RD
    4. At the system> prompt, run the following command: resetsp
    5. Log off from the power control adapter, by running the following command: exit
    6. Wait for 10 minutes so that the power control adapter can finish booting.
    7. Run the ia_powercontrol command again to force a shutdown.
  4. If you cannot shut down the cluster node server by using the power control adapter, log on to the server as root and run the following command: shutdown -h now
    Note: If the cluster node server is running in ‘Enhanced Tamper Protection’ mode, you must request and install an Emergency Support Access (ESA) patch to set up root access to the server.
  5. If you cannot shut down the cluster node server by using any of the previous steps, press the server power button for several seconds to stop the server.
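
Steps 1 and 2 above can also be scripted on the management console server. The following is a minimal sketch to be run as iaadmin; the timeout, the polling interval and the exact substring matched in the ia_powercontrol status output are assumptions to adapt to your environment.

# Minimal sketch: force a cluster node shutdown and wait for power-off (steps 1 and 2 above).
# Run as iaadmin on the management console server; timeout and interval are assumptions.
import subprocess
import time

def force_shutdown(node, timeout=600, interval=30):
    # Step 1: force a shutdown of the cluster node server
    subprocess.run(["ia_powercontrol", "-d", node, "-N"], check=True)

    # Step 2: poll the power state until the node is reported as not powered on
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(["ia_powercontrol", "-s", node],
                                capture_output=True, text=True)
        if "is not powered on" in result.stdout:
            print(f"{node} has shut down.")
            return True
        time.sleep(interval)
    print(f"{node} did not shut down within {timeout} seconds; "
          "try resetting the power control adapter (step 3) or root access (step 4).")
    return False

if __name__ == "__main__":
    force_shutdown("ianodeX")  # replace ianodeX with the cluster node server name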

https://www-304.ibm.com/support/docview.wss?mynp=OCSSS9C9&mync=E&uid=swg21499443&myns=s028

 


Storwize V7000 Systems Running V6.1.0.0 – V6.1.0.6 Code May Shut Down Unexpectedly

Storwize V7000 Systems Running V6.1.0.0 – V6.1.0.6 Code May Shut Down Unexpectedly During Normal Operation, Resulting in a Loss of Host Access and Potential Loss of Fast-Write Cache Data

 

Storwize V7000 units running code levels between V6.1.0.0 and V6.1.0.6 are exposed to an issue that can result in both node canisters shutting down simultaneously and unexpectedly during normal operation.

 

Content

An issue exists in the V6.1.0.0 – V6.1.0.6 code that can result in both node canisters abruptly shutting down during normal operation, resulting in a loss of hardened configuration metadata on these nodes and the requirement to perform a manual cluster recovery process to restore the configuration. This recovery process may take up to several hours to complete.

 

Additionally, any host I/O data that was resident in the fast-write cache at the time of failure will be unrecoverable.

 

 

Fix

A workaround was introduced in the V6.1.0.7 PTF release which, although not eliminating this issue, prevents the shutdown event on one node canister from propagating to the other node canister. This prevents a double-node shutdown from occurring, thereby avoiding any loss of host access to data and the need for a cluster recovery process. This workaround was further improved in the V6.1.0.8 PTF release to ensure that affected nodes recover automatically.

 

If a single node shutdown event does occur when running V6.1.0.8, this node will automatically recover and resume normal operation without requiring any manual intervention.

 

IBM Development is continuing to work on a complete fix for this issue, to be released in a future PTF; in the meantime, customers should upgrade to V6.1.0.8 to avoid an outage.
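
Deciding whether a particular system needs this upgrade comes down to comparing its installed code level with V6.1.0.8. The sketch below is only an illustration of that comparison; how the installed code level is obtained from the cluster is left out, and the version strings used are just the ones quoted in this advisory.

# Minimal sketch: check whether a code level is below the recommended V6.1.0.8.
# Affected: 6.1.0.0 - 6.1.0.6; workaround in 6.1.0.7; improved workaround in 6.1.0.8.

def parse_level(level):
    # "6.1.0.8" -> (6, 1, 0, 8)
    return tuple(int(part) for part in level.split("."))

def needs_upgrade(installed, recommended="6.1.0.8"):
    return parse_level(installed) < parse_level(recommended)

if __name__ == "__main__":
    for level in ["6.1.0.4", "6.1.0.7", "6.1.0.8"]:
        note = "upgrade to 6.1.0.8 recommended" if needs_upgrade(level) else "at or above 6.1.0.8"
        print(level, "->", note)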

 

Please visit the following URL to download the V6.1.0.8 code:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000966

 

https://www-304.ibm.com/support/docview.wss?mynp=OCST3FR7&mync=E&uid=ssg1S1003805&myns=s028



Potential Problem on XIV Storage System microcode versions 10.2.2 through 10.2.4.a

A potential problem on XIV Storage System microcode versions 10.2.2 through 10.2.4.a can be triggered by changing the system time via Network Time Protocol (NTP) or by changing the clock via XCLI.

When the system time is changed to more than approximately 500 years in the future, the Manager Node will get stuck. It will stop handling XCLI operations and, more severely, it will no longer be able to detect any failure in the system. Once such a failure occurs, all hosts will lose access to the system and IBM support needs to be contacted immediately.

If the Manager Node is impacted, the system will continue to serve I/O, but in case of a subsequent data component failure (either a disk or a module) the system might not properly identify the failure, and therefore will not initiate a rebuild process, possibly causing hosts to lose access to data. Other symptoms may include, but are not limited to: the XIV will not be able to properly detect a loss of input AC (building power outage) and therefore will not shut down while ensuring all writes are committed to disk; the customer may not be able to perform any operations on the XIV, including GUI updates or inquiries.

If the machine is reporting to the XIV service center, IBM will receive the events that indicate this issue and will be able to contact the customer to verify and fix this state.

 

Environment

Affected versions

  • 10.2.2
  • 10.2.2.a
  • 10.2.4
  • 10.2.4.a

Resolving the problem

Mitigation

  • Remove NTP server configuration from the XIV to avoid getting into this situation.
  • Do not manually change the machine time to a date 500 years or more in the future (this would only happen by mistake).

Fix
The fix is included in version 10.2.4.b, which is planned to be released in Q2 2011.

  • Version 10.2.4.b will disallow setting invalid dates; the year must be between 2000 and 2030 (see the sketch after this list).
  • Version 10.2.4.b will add error-handling messages and more debug information to better manage this situation.
  • NEW_TIME_CHANGE_IS_INAVLID
    This event will be raised when an attempt to set the time is blocked because the time is invalid (year not between 2000 and 2030).
  • SETTING_NEW_TIME
    This event is raised every time an attempt is made to set a ‘complete’ time, rather than a delta from the last time setting (when the delta exceeds TIME_UPDATE_MINIMUM_DIFF).
  • Both events are limited to one per hour.
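
Until the fix is available, the same year-range check that 10.2.4.b is described as enforcing can be applied manually before any clock change on the affected levels. Below is a minimal sketch of that pre-check; it only validates a proposed date and does not talk to the XIV or XCLI.

# Minimal sketch: pre-check a proposed system time before setting it via XCLI or NTP,
# mirroring the year range (2000-2030) that the 10.2.4.b fix is described as enforcing.
from datetime import datetime

def is_valid_xiv_time(proposed):
    # Reject dates outside the allowed year range, including the far-future settings
    # (roughly 500+ years ahead) that trigger the Manager Node problem.
    return 2000 <= proposed.year <= 2030

if __name__ == "__main__":
    for candidate in [datetime(2011, 6, 1), datetime(2600, 1, 1)]:
        status = "OK to set" if is_valid_xiv_time(candidate) else "blocked: invalid year"
        print(candidate.isoformat(), "->", status)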

 

https://www-304.ibm.com/support/docview.wss?mynp=OCSTJTAG&mync=E&uid=ssg1S1003838&myns=s028


Storwize V7000 new features have been released

Storwize V7000 was introduced in October 2010 into IBM's mid-range storage portfolio. The V7000 achieved one of the fastest product ramps in IBM history, with more than 1,800 systems sold to more than 1,000 customers worldwide since general availability in November 2010. The product now supports 10 GbE ports in new control enclosure models, which can increase iSCSI throughput by up to 700 per cent. A high-performance 2.5-inch 146 GB, 15,000 rpm SAS drive is available and provides up to 30 per cent faster throughput. The original disk drive choices are 2.5-inch 10,000 rpm SAS drives in 300, 450 and 600 GB capacities, plus a 300 GB E-MLC (enterprise-grade multilevel cell) SSD, and a 3.5-inch 2 TB 7.2K rpm Near-Line SAS disk is also available. Each V7000 control enclosure (a 2U rack-mountable chassis) has eight 8 Gbps FC host ports, four 1 Gbps iSCSI host ports and, optionally, four 10 Gbps iSCSI host ports, with 16 GB of cache memory per control enclosure.

 

With IBM Storwize V7000 software v6.2, two V7000 control enclosures can now be clustered together, doubling the capacity of a single managed V7000 to up to 480 TB. The software has also gained built-in real-time performance monitoring functionality, and the FlashCopy function can now be used with Remote Mirror volumes, adding more choices for high-availability scenarios. This is similar to high-end storage products like the DS8000.

 

VMware vStorage APIs for Array Integration (VAAI) are now supported in v6.2, meaning the array can take on storage work offloaded from the ESX server, enabling more VMs to be hosted and run. This is one of the most important features in this release.

 

When a customer buys a new V7000, IBM offers the data migration function free of charge for 60 days, which is enough time for a company to migrate all of its data from another storage device to the V7000, and/or to run other storage devices behind the V7000 using external storage virtualization. Storwize V7000 can be upgraded from the smallest to the largest configuration without disruption. Existing V7000s can participate in clusters via a non-disruptive software upgrade to v6.2, and a cluster is managed as a single system. Once clustering is enabled, expansion enclosures can be added to scale capacity and/or a second control enclosure can be added to boost performance.

 

There is a Storwize V7000 plug-in for VMware vCenter, which also supports virtualized external disk systems. The list of supported external systems now includes EMC's VNX, HDS's VSP and HP's P9500, plus the Texas Memory Systems RamSan-620. Lastly, existing model 112 and 124 control enclosures can be upgraded to add 10 GbE support. iSCSI is becoming more and more important because of its agility and cost advantages.

 

As a reminder, I want to note that everybody usually wants 15K rpm HDDs, but 10K rpm HDDs can be the better choice if you have a chance to balance the performance impact. The V7000 can deliver better performance from 10K rpm HDDs when they are combined with SSDs and/or the 146 GB 15K rpm HDDs. The result is that, with the right approach and calculation, you can get better performance than with native 15K rpm HDDs alone.

 

IBM has not yet added compression to the V7000; it is expected at some future date. IBM said in September 2010 that, within 12-18 months, we would see the RACE compression technology integrated into IBM's block storage products, so we may have some time to wait.

 

Most of the new V7000 functionality will be available in June this year. There is no extra feature to order and no extra charge for clustering. The vCenter plug-in will be available at no charge on 30 June (for v6.1 software) and 31 July (for v6.2 software).

 

http://www-03.ibm.com/systems/storage/disk/storwize_v7000/index.html

 


Storwize V7000 Node Canisters May Shut Down or Reboot Unexpectedly During Normal Operation

Storwize V7000 node canisters may shut down or reboot during normal operation, leading to a loss of host I/O access.

Description:

Storwize V7000 node canisters running V6.1.0.0 – V6.1.0.4 code levels may shut down without warning during normal I/O operations.

These shutdown events will typically occur on both node canisters in the Storwize V7000 system, with the second node canister shutting down a number of hours after the first. Once the second node canister has shut down, hosts will lose access to the disks presented by the Storwize V7000 until at least one of the node canisters has been manually brought back online.

Workaround:

If this issue is encountered on V6.1.0.0 – V6.1.0.4, the recovery action is to reseat each offline node canister in order to bring it back online.

Partial Fix Introduced in V6.1.0.5

A partial fix was introduced in V6.1.0.5, which causes node canisters that experience this condition to reboot and automatically resume I/O operations, rather than shut down and remain offline. Customers running V6.1.0.5 code are, however, still exposed to the risk of both node canisters rebooting at the same time, which could lead to a short, temporary outage of host I/O.

Complete Fix:

This issue has been fully resolved by APAR IC74088 in the V6.1.0.6 release. Please visit the following URL to download the latest V6.1.0.x code:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003748&myns=s028&mynp=OCST3FR7&mync=E
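
For reference, the behaviour described in this advisory can be summarised as a simple classification by code level. The sketch below is only an illustration of that mapping, assuming a 6.1.0.x code level string as quoted above; it does not query the system.

# Minimal sketch: classify a 6.1.0.x code level against the behaviour described above.
# 6.1.0.0 - 6.1.0.4: canisters may shut down and remain offline (reseat to recover).
# 6.1.0.5: partial fix, canisters reboot and resume I/O automatically.
# 6.1.0.6 and later: fully resolved by APAR IC74088.

def parse_level(level):
    return tuple(int(part) for part in level.split("."))

def classify(level):
    v = parse_level(level)
    if v < (6, 1, 0, 5):
        return "affected: node canisters may shut down and remain offline"
    if v == (6, 1, 0, 5):
        return "partial fix: node canisters reboot and resume I/O automatically"
    return "fixed: resolved in 6.1.0.6 or later"

if __name__ == "__main__":
    for level in ["6.1.0.3", "6.1.0.5", "6.1.0.6"]:
        print(level, "->", classify(level))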
