Archive for category Stor-General

Tivoli Storage Manager sometimes triggers frequent reorganization of a collection database

The Tivoli Storage Manager version that is used in Information Archive can sometimes trigger frequent reorganization of the collection database. This situation can degrade the performance of the database.

Cause

The frequent attempts to reorganize the database occur because the automatic reorganization does not succeed.

Diagnosing the problem

Log on to the administrative interface with a user ID that has the tsmAdministrator role.

  1. Expand the ‘Tivoli Storage Manager’ menu item, select Manage Servers and select the server to be checked.
  2. On the ‘Server Properties’ page select ‘Activity Log’ and press Update Table.

If the problem exists, the activity log frequently contains lines similar to the following example, sometimes as often as every 10 minutes:

2011-05-06 00:57:21 ANR0293I Reorganization for table AF.Clusters started.
2011-05-06 00:57:26 ANR0294I Reorganization for table AF.Clusters ended.
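
If you have command-line administrative access to the Tivoli Storage Manager server instance, the same check can also be run from the dsmadmc administrative client. This is only a sketch, not part of the documented procedure; the administrator ID and password are placeholders:

dsmadmc -id=admin -password=secret "query actlog begindate=today search=ANR0293I"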

 

Resolving the problem

If the problem occurs, you must switch off automatic reorganization temporarily and initiate a manual reorganization. The automatic reorganization is switched on again after the problem is resolved.

Notes about the procedure:

  • The steps require root access to the cluster nodes. If enhanced tamper protection is set, you must install the Emergency Support Access (ESA) patch on the appliance. To obtain the ESA patch, go to the following website:
    https://w3.tap.ibm.com/w3ki02/display/TIAM/ESA+Patch+instructions
  • For the purpose of this example, the user is initially logged on to cluster node server ianode1.
  • The collection in the following example is named “FILE02”. Substitute the correct collection name when you run the commands in the instructions.
  • All examples include the command prompt text and the expected results.

Complete the following steps to resolve the problem:
1. From the KVM console, log on to a cluster node server and change to the root user:

iaadmin@ianode1:~> su
Password:
2. Change to the /tsm directory for the collection and back up the dsmserv.opt file:

ianode1:~ # cd /tiam/FILE02/tsm/
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt dsmserv.opt.orig
3. Append “ALLOWREORGTABLE NO” to the end of the dsmserv.opt file:

ianode1:/tiam/FILE02/tsm # echo "" >> dsmserv.opt
ianode1:/tiam/FILE02/tsm # echo "ALLOWREORGTABLE NO" >> dsmserv.opt
4. Log on to the Information Archive administrative interface:

    1. Ensure that there is no collection I/O activity, then suspend the collection by clicking Information Archive > System Overview > Collections and clicking the Suspend collection icon.
    2. Resume the collection. The new “ALLOWREORGTABLE” server setting is now active.

5. Change back to the cluster node and find the DB2 instance user for the collection.

You can complete this action on any of the cluster nodes:

ianode1:/tiam/FILE02/tsm # ls -d /tiam/*/tsm/u*

Sample output:

/tiam/FILE02/tsm/u1

In the example, the DB2 instance user is “u1”.
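
If you prefer to extract only the user name, a small shell sketch based on the same directory layout (it assumes exactly one instance user directory, as in the sample output):

ianode1:/tiam/FILE02/tsm # basename $(ls -d /tiam/FILE02/tsm/u*)
u1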
6. Locate the cluster node where the Tivoli Storage Manager server is currently running:

ianode1:/tiam/FILE02/tsm # mb_display_location.py -r -t
7. Locate the line containing “tsm” and the collection name in the output.

In this example, the Tivoli Storage Manager server for collection “FILE02” is running on ianode2.

Sample output:

start of /opt/tivoli/tiam/bin/mb_display_location.py.
mmlsnode
GPFS nodeset Node list
————- ——————————————————-
ianode1 ianode1 ianode2

mmlsnode
GPFS nodeset Node list
————- ——————————————————-
ianode1 ianode1 ianode2

returned from /opt/tivoli/tiam/bin/mb_display_location.py:
9.155.104.9|ianode1|||
9.155.104.12|ianode1|ctdb||
9.155.104.10|ianode2|||
9.155.104.11|ianode2|ctdb||
172.31.4.1|ianode2|tsm|FILE02|
end of /opt/tivoli/tiam/bin/mb_display_location.py. (None)
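
If you want to extract the node name directly rather than reading the output, a small filter sketch over the pipe-delimited lines shown above (it assumes the output format stays exactly as in this example):

ianode1:/tiam/FILE02/tsm # mb_display_location.py -r -t | awk -F'|' '$3 == "tsm" && $4 == "FILE02" {print $2}'
ianode2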

8. If the Tivoli Storage Manager server is running on a different cluster node than where you are currently logged on as root, log on to the cluster node where the Tivoli Storage Manager server is running.

ianode1:/var/opt/tivoli/tiam/log # ssh ianode2
Last login: Fri Apr 29 10:39:26 2011 from ianode1
9. Reorganize the DB2 database manually by completing the following steps:

a. Change to the DB2 instance user and run the following command:

ianode2:~ # su - u1
b. Run the following command to “source” the DB2 profile:

u1@ianode2:~> .  ~/sqllib/db2profile
c. Connect to the Tivoli Storage Manager database:

u1@ianode2:~> db2 connect to TSMDB1
Expected result:

Database Connection Information

Database server        = DB2/LINUXX8664 9.5.5
SQL authorization ID   = U1
Local database alias   = TSMDB1

d. Manually reorganize the database, by running the following command:
u1@ianode2:~> db2 reorg table TSMDB1.AF_CLUSTERS

Expected result:

DB20000I  The REORG command completed successfully.
e. Run the DB2 “RUNSTATS” command:

u1@ianode2:~> db2 RUNSTATS ON TABLE TSMDB1.AF_CLUSTERS AND SAMPLED DETAILED INDEXES ALL

Expected result:

DB20000I  The RUNSTATS command completed successfully.
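
Optionally, while still connected as the instance user, you can verify whether DB2 still flags the table for reorganization by running the DB2 REORGCHK command. This extra check is not part of the original procedure:

u1@ianode2:~> db2 reorgchk current statistics on table TSMDB1.AF_CLUSTERS

In the resulting report, an asterisk in the REORG column indicates that DB2 still considers a reorganization of the table or one of its indexes necessary.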
f. Exit from the DB2 instance user, by running the following command:

u1@ianode2:~> exit

Expected results:

logout
g. If you changed to a different cluster node server to run the DB2 commands, change back to the cluster node where you were originally logged on, by running the following command:

ianode2:~ # exit
Expected results:

logout
Connection to ianode2 closed.
10. Restore the backup of the dsmserv.opt file:

ianode1:/var/opt/tivoli/tiam/log # cd /tiam/FILE02/tsm/
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt.orig dsmserv.opt
11. Change back to the administrative interface, and complete the following steps:

    1. Suspend the collection.
    2. Resume the collection.

The original Tivoli Storage Manager server setting is now active. The automatic database reorganization is switched on again.

Related information

TSM database considerations

 

 


IBM XIV Host Attachment Kit v1.6 for AIX, SLES, HP-UX, RHEL, Windows and Solaris released

IBM XIV Host Attachment Kit for AIX, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for AIX is a software pack that simplifies the task of connecting an IBM AIX host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native AIX multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for SLES, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for SLES is a software pack that simplifies the task of connecting an SLES host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native SLES multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for HP-UX, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for HP-UX is a software pack that simplifies the task of connecting an HP-UX host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native HP-UX multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for RHEL, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for RHEL is a software pack that simplifies the task of connecting a RHEL host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native RHEL multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for Windows, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for Windows is a software pack that simplifies the task of connecting a Microsoft Windows Server host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native Windows Server multipath configuration. To further help the administrator, the HAK also installs required Windows Server hotfixes, and checks the compatibility of the installed HBA drivers as it was known when this HAK was released. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for Solaris, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for Solaris is a software pack that simplifies the task of connecting a Solaris host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native Solaris multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.
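
Regardless of platform, attachment follows the same basic pattern with the kit's CLI tools. A minimal sketch, assuming the HAK is already installed and run with administrative privileges (xiv_attach and xiv_devlist are the utilities that ship with the kit; exact prompts and output differ per platform):

# xiv_attach      (interactive wizard: detects connected XIV systems, defines the host on the XIV system, and applies the multipath configuration)
# xiv_devlist     (lists the XIV volumes that are mapped to this host)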

 


IBM disk systems - Shutting down a cluster node server in the administrative interface does not work

When you shut down a cluster node server in the administrative interface, the final status of the server is listed as “Running Maintenance Mode” instead of the expected status of “Stopped”.

Symptom

After you shut down the cluster node server in the administrative interface, the status “Shutting Down” is displayed for a while and then changes to “Running Maintenance Mode” instead of “Stopped”. You can still reach the cluster node server by using ping and SSH.

Cause

The administrative interface uses the power control adapter of the cluster node server to initiate a clean shutdown of the server. If the power control adapter is not working correctly, or if the operating system on the node hangs during shutdown, the server might stay in a running state with a part of the processes stopped. The administrative interface detects this situation as a state similar to “Running Maintenance Mode”.

 

Resolving the problem

Complete the following steps to resolve the problem:

  1. Force a shutdown of the cluster node server. This method works if the operating system is preventing the server from shutting down correctly. Complete the following steps:
    1. Log on to the management console server with the iaadmin ID and password.
    2. Run the following command, where ianodeX is the name of the cluster node server:
      ia_powercontrol -d ianodeX -N
  2. To check if the cluster node server is shut down, run the following command as iaadmin on the management console server, where ianodeX is the name of the server:
    ia_powercontrol -s ianodeX
    The server might take a few minutes to shut down; see the polling sketch after this list if you want to wait for the shutdown from the command line. The command returns the following message after a successful shutdown:
    Node attached to power control hardware at 'ianodeX' is not powered on.
  3. If the ia_powercontrol command returns error messages, the power control adapter might not be working correctly. Complete the following steps to reset the power control adapter:
    1. Log on to the power control adapter, by running the following command, where ianodeX is the name of the cluster node server: telnet pwrctl-ianodeX
    2. At the login: prompt, enter the following ID in upper-case characters: USERID
    3. At the Password: prompt, enter the following password in upper-case characters, using a zero instead of an “O” character: PASSW0RD
    4. At the system> prompt, run the following command: resetsp
    5. Log off from the power control adapter, by running the following command: exit
    6. Wait for 10 minutes so that the power control adapter can finish booting.
    7. Run the ia_powercontrol command again to force a shutdown.
  4. If you cannot shut down the cluster node server by using the power control adapter, log on to the server as root and run the following command: shutdown -h now
    Note: If the cluster node server is running in ‘Enhanced Tamper Protection’ mode, you must request and install an Emergency Support Access (ESA) patch to set up root access to the server.
  5. If you cannot shut down the cluster node server by using any of the previous steps, press the server power button for several seconds to stop the server.
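
For step 2 above, the following sketch, run as iaadmin on the management console server, polls the power state until the node reports that it is powered off. ianodeX is a placeholder, and the 5-minute limit (30 checks, 10 seconds apart) is an arbitrary assumption:

for i in $(seq 1 30); do
    ia_powercontrol -s ianodeX | grep -q "is not powered on" && break
    sleep 10
done
ia_powercontrol -s ianodeX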

https://www-304.ibm.com/support/docview.wss?mynp=OCSSS9C9&mync=E&uid=swg21499443&myns=s028

 


Storwize V7000 new features released

Storwize V7000 was introduced in October 2010 into the IBM mid-range storage portfolio. The V7000 achieved one of the fastest product ramps in IBM history, with more than 1,800 systems sold to more than 1,000 customers worldwide since general availability in November 2010. The product supports 10 GbE ports in new control enclosure models, which can increase iSCSI throughput by up to 700 per cent. A high-performance 2.5-inch 146 GB, 15,000 rpm SAS drive is available and provides up to 30 per cent faster throughput. The original disk drive choices are 2.5-inch 10,000 rpm SAS drives in 300, 450 and 600 GB capacities, plus a 300 GB E-MLC (enterprise-grade multilevel cell) SSD; a 3.5-inch 2 TB 7,200 rpm Nearline SAS disk is also available. Each V7000 control enclosure (a 2U rack-mountable chassis) has eight 8 Gbps FC host ports, four 1 Gbps iSCSI host ports and optionally four 10 Gbps iSCSI host ports, with 16 GB of cache memory per control enclosure.

 

Two V7000 control enclosures can now be clustered together with IBM Storwize V7000 software v6.2, doubling the capacity of a single managed V7000 system to up to 480 TB. The software has gained built-in real-time performance monitoring functionality, and the FlashCopy function can now be used with Remote Mirror volumes, adding more choices for high-availability scenarios. This is similar to high-end storage products like the DS8000.

 

The VMware vStorage APIs for Array Integration (VAAI) are now supported, meaning the array can take on storage work offloaded from the ESX server, enabling more VMs to be hosted and run. This is one of the most important features in this release (v6.2).

 

When a customer buys a new V7000, IBM offers the data migration feature free for 60 days, which is typically enough time for a company to migrate all of its data from another storage device to the V7000, or to any storage devices virtualized behind the V7000 as external storage. Storwize V7000 can be upgraded from the smallest to the largest configuration without disruption. Existing V7000s can participate in clusters via a non-disruptive software upgrade to v6.2, and a cluster is managed as a single system. Once clustering is enabled, expansion enclosures can be added to scale capacity and/or a second control enclosure can be added to boost performance.

 

There is a Storwize V7000 plug-in for VMware vCenter, which also supports virtualized external disk systems. The list of supported external systems now includes EMC's VNX, HDS's VSP and HP's P9500, plus the Texas Memory Systems RamSan-620. Lastly, existing model 112 and 124 control enclosures can be upgraded to add 10 GbE support. iSCSI is becoming more and more important because of its agility and cost advantages.

 

As a reminder, I want to note that everybody usually wants 15K rpm HDDs, but 10K rpm HDDs can be the better choice if you have a chance to balance the performance impact. The V7000 can improve the performance of 10K rpm HDDs by combining them with SSDs and/or 146 GB 15K rpm HDDs. The result is that, with the right approach and sizing calculation, you can get better performance than with native 15K rpm HDDs alone.

 

IBM has not yet added compression to the V7000; it is expected at some future date. IBM said in September 2010 that, within 12 to 18 months, we would see the RACE integration into IBM's block storage products, so we may have some time to wait.

 

Most of the new V7000 functionality will be available in June this year. There is no extra feature to order and no extra charge for clustering. The vCenter plug-in will be available at no charge on 30 June (for v6.1 software) and 31 July (for v6.2 software).

 

http://www-03.ibm.com/systems/storage/disk/storwize_v7000/index.html

 


IBM XIV Storage Plug-in for Microsoft Cluster Server (MSCS), Version 1.0.2.1

The plug-in enables failover automation of XIV storage services that run on two geographically dispersed cluster nodes, enabling deployment of MSCS in a geo-cluster configuration.

 

MSCS and the XIV storage system support one-button failover initiation for automated recovery, or manual failover for step-by-step control over the recovery process. Both accommodate a broad range of failover scenarios and infrastructure components. Version 1.0.2.1 includes a hotfix for a known issue.

 

http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000869&myns=s028&mynp=OCSTJTAG&mync=E


To avoid potential loss of access, customers must be on "IBM XIV Host Attachment Kit for Windows, Version 1.5.3" or above

To avoid potential loss of access that might occur during XIV operation or a hot upgrade, customers must be on “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” or above, and must move to that level before the XIV upgrade.

Symptom

Servers disconnect from XIV

Cause

  1. During some cases of module failure (for example, an SMI timeout), a Windows 2003 server might disconnect from XIV
  2. During an XIV hot upgrade, a Windows 2003 server might disconnect from XIV
  3. Extreme steady-state situations

 

Environment

Windows 2003 or Windows 2003 R2 in a cluster environment connected to XIV

 

Resolving the problem

In a Windows 2003 or Windows 2003 R2 cluster environment connected to XIV, customers must be on “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” or above.

This release of “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” contains a fix to avoid potential loss of access.
Here is the link to download “IBM XIV Host Attachment Kit for Windows, Version 1.5.3”:

http://www-01.ibm.com/support/docview.wss?rs=1319&context=STJTAG&context=HW3E0&dc=D400&q1=ssg1*&uid=ssg1S4000795&loc=en_US&cs=utf-8&lang=en

 


Storwize V7000 Node Canisters May Shut Down or Reboot Unexpectedly During Normal Operation

Storwize V7000 node canisters may shut down or reboot during normal operation, leading to a loss of host I/O access.

Description:

Storwize V7000 node canisters running V6.1.0.0 – V6.1.0.4 code levels may shut down without warning during normal I/O operations.

These shutdown events will typically occur on both node canisters in the Storwize V7000 system, with the second node canister shutting down a number of hours after the first. Once the second node canister has shut down, host access to the disks presented by the Storwize V7000 is lost until at least one of the node canisters has been manually brought back online.

Workaround:

If this issue is encountered on V6.1.0.0 – V6.1.0.4, the recovery action is to reseat each offline node canister in order to bring it back online.
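
To identify which node canister is offline before reseating it, you can list the node status from the Storwize V7000 command-line interface over SSH. This is a general sketch; it assumes CLI access to the system and uses the standard lsnode view:

svcinfo lsnode

The status column in the output shows whether each node canister is online or offline.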

Partial Fix Introduced in V6.1.0.5:

A partial fix was introduced in V6.1.0.5, which caused node canisters that experienced this condition to reboot and automatically resume I/O operations, rather than shut down and remain offline. Customers running V6.1.0.5 code are, however, still exposed to the risk of both node canisters rebooting at the same time, which could lead to a short, temporary outage to host I/O.

Complete Fix:

This issue has been fully resolved by APAR IC74088 in the V6.1.0.6 release. Please visit the following URL to download the latest V6.1.0.x code:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003748&myns=s028&mynp=OCST3FR7&mync=E
