Archive for category Sys-IBM

Tivoli Storage Manager sometimes triggers frequent reorganization of a collection database

The Tivoli Storage Manager version that is used in Information Archive can sometimes trigger frequent reorganization of the collection database. This situation can degrade the performance of the database.

Cause

The frequent attempts to reorganize the database occur because the automatic reorganization does not succeed.

Diagnosing the problem

Log on to the administrative interface with a user ID that has the tsmAdministrator role.

  1. Expand the ‘Tivoli Storage Manager’ menu item, select Manage Servers and select the server to be checked.
  2. On the ‘Server Properties’ page select ‘Activity Log’ and press Update Table.

If the problem exists, the activity log frequently contains lines similar to the following example, for example every 10 minutes:

2011-05-06 00:57:21 ANR0293I Reorganization for table AF.Clusters started.
2011-05-06 00:57:26 ANR0294I Reorganization for table AF.Clusters ended.
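The same check can also be made from a Tivoli Storage Manager administrative command-line session (dsmadmc), if one is available for the collection's server; this is only an alternative sketch, not part of the original procedure:

query actlog begindate=today search=ANR0293I

If the problem is present, the query keeps returning new ANR0293I/ANR0294I reorganization messages for the same table every few minutes.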

 

Resolving the problem

If the problem occurs, you must switch off automatic reorganization temporarily and initiate a manual reorganization. The automatic reorganization is switched on again after the problem is resolved.

Notes about the procedure:

  • The steps require root access to the cluster nodes. If enhanced tamper protection is set, you must install the Emergency Support Access (ESA) patch on the appliance. To obtain the ESA patch, go to the following website:
    https://w3.tap.ibm.com/w3ki02/display/TIAM/ESA+Patch+instructions
  • For the purpose of this example, the user is initially logged on to cluster node server ianode1.
  • The collection in the following example is named “FILE02”. Substitute the correct collection name when you run the commands in the instructions.
  • All examples include the command prompt text and the expected results.

Complete the following steps to resolve the problem:
1. From the KVM console, log on to a cluster node server and change to the root user:

iaadmin@ianode1:~> su
Password:
2. Change to the /tsm directory for the collection and back up the dsmserv.opt file:

ianode1:~ # cd /tiam/FILE02/tsm/
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt dsmserv.opt.orig
3. Append “ALLOWREORGTABLE NO” to the end of the dsmserv.opt file:

ianode1:/tiam/FILE02/tsm # echo "" >> dsmserv.opt
ianode1:/tiam/FILE02/tsm # echo "ALLOWREORGTABLE NO" >> dsmserv.opt
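Optionally, verify that the option was appended correctly before continuing; a quick check from the same directory:

ianode1:/tiam/FILE02/tsm # tail -1 dsmserv.opt

Expected result:

ALLOWREORGTABLE NO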
4. Log on to the Information Archive administrative interface:

    1. Ensure that there is no collection I/O activity and suspend the collection by clicking Information Archive > System Overview > Collections, and then clicking the Suspend collection icon.
    2. Resume the collection. The new “ALLOWREORGTABLE” server setting is now active.

5. Change back to the cluster node and find the DB2 instance user for the collection.

You can complete this action on any of the cluster nodes:

ianode1:/tiam/FILE02/tsm # ls -d /tiam/*/tsm/u*

Sample output:

/tiam/FILE02/tsm/u1

In the example, the DB2 instance user is “u1”.
6. Locate the cluster node where the Tivoli Storage Manager server is currently running:

ianode1:/tiam/FILE02/tsm # mb_display_location.py -r -t
7. Locate the line containing “tsm” and the collection name in the output.

In this example, the Tivoli Storage Manager server for collection “FILE02” is running on ianode2.

Sample output:

start of /opt/tivoli/tiam/bin/mb_display_location.py.
mmlsnode
GPFS nodeset Node list
-------------   -------------------------------------------------------
ianode1 ianode1 ianode2

mmlsnode
GPFS nodeset Node list
-------------   -------------------------------------------------------
ianode1 ianode1 ianode2

returned from /opt/tivoli/tiam/bin/mb_display_location.py:
9.155.104.9|ianode1|||
9.155.104.12|ianode1|ctdb||
9.155.104.10|ianode2|||
9.155.104.11|ianode2|ctdb||
172.31.4.1|ianode2|tsm|FILE02|
end of /opt/tivoli/tiam/bin/mb_display_location.py. (None)
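If you prefer to filter the output rather than scan it, a simple pipe (using grep, which is available on the cluster nodes) shows only the Tivoli Storage Manager entries:

ianode1:/tiam/FILE02/tsm # mb_display_location.py -r -t | grep '|tsm|'
172.31.4.1|ianode2|tsm|FILE02|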

8. If the Tivoli Storage Manager server is running on a different cluster node than where you are currently logged on as root, log on to the cluster node where the Tivoli Storage Manager server is running.

ianode1:/var/opt/tivoli/tiam/log # ssh ianode2
Last login: Fri Apr 29 10:39:26 2011 from ianode1
9. Reorganize the DB2 database manually, by completing the following steps:

a. Change to the DB2 instance user and run the following command:

ianode2:~ # su - u1
b. Run the following command to “source” the DB2 profile:

u1@ianode2:~> .  ~/sqllib/db2profile
c. Connect to the Tivoli Storage Manager database:

u1@ianode2:~> db2 connect to TSMDB1
Expected result:

Database Connection Information

Database server        = DB2/LINUXX8664 9.5.5
SQL authorization ID   = U1
Local database alias   = TSMDB1

d. Manually reorganize the database, by running the following command:
u1@ianode2:~> db2 reorg table TSMDB1.AF_CLUSTERS

Expected result:

DB20000I  The REORG command completed successfully.
e. Run the DB2 “RUNSTATS” command:

u1@ianode2:~> db2 RUNSTATS ON TABLE TSMDB1.AF_CLUSTERS AND SAMPLED DETAILED INDEXES ALL

Expected result:

DB20000I  The RUNSTATS command completed successfully.
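Before exiting, you can optionally check whether DB2 still considers the table disorganized, by using the REORGCHK command; this is an extra verification step, not part of the original procedure:

u1@ianode2:~> db2 reorgchk current statistics on table TSMDB1.AF_CLUSTERS

In the output, an asterisk in the F1-F3 columns for the table (or F4-F8 for its indexes) indicates that a reorganization is still recommended.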
f. Exit from the DB2 instance user, by running the following command:

u1@ianode2:~> exit

Expected results:

logout
g. If you changed to a different cluster node server to run the DB2 commands, change back to the cluster node where you were originally logged on, by running the following command:

ianode2:~ # exit
Expected results:

logout
Connection to ianode2 closed.
10. Restore the backup of the dsmserv.opt file. Change back to the collection's /tiam/FILE02/tsm directory first, because that is where the backup was created in step 2:

ianode1:/var/opt/tivoli/tiam/log # cd /tiam/FILE02/tsm
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt.orig dsmserv.opt
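Optionally, confirm that the restored file no longer contains the temporary option:

ianode1:/tiam/FILE02/tsm # grep ALLOWREORGTABLE dsmserv.opt

No output is expected, because the original, backed-up file did not contain the ALLOWREORGTABLE entry.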
11. Change back to the administrative interface, and complete the following steps:

    1. Suspend the collection.
    2. Resume the collection.

The original Tivoli Storage Manager server setting is now active. The automatic database reorganization is switched on again.

Related information

TSM database considerations

 

 


IBM released Storage Management Pack 1.1.0 for SCOM

This document contains information related to obtaining the IBM Storage Management Pack for Microsoft System Center Operations Manager v1.1.0. 

 

Download Description

The package is a set of software modules, or management packs, that allow you to access and monitor the IBM storage systems Storwize V7000, SVC, XIV, and DS8000 through the host-based Microsoft SCOM interface. Please refer to the release notes and user guide for details of the supported storage device versions.

 

Prerequisites

  • IBM Storage Management Pack Release Notes v1.1.0 (English, 136588 bytes)

Installation Instructions

  • IBM Storage Management Pack User Guide v1.1.0 (English, 1336807 bytes)

Download package

  • IBM Storage MP v1.1.0 32 bits (FTP): platform Windows, English, 42075880 bytes, date 1-5-2011, documentation not applicable
  • IBM Storage MP v1.1.0 64 bits (FTP): platform Windows, English, 36438392 bytes, date 1-5-2011, documentation not applicable

Cross Reference information

  • Segment: Disk Storage Systems; Product: System Storage DS8700


https://www-304.ibm.com/support/docview.wss?mynp=OCSTUVMB&mync=E&uid=ssg1S4000937&myns=s028

 


To avoid potential loss of access, customers must be on "IBM XIV Host Attachment Kit for Windows, Version 1.5.3" or above

To avoid a potential loss of access that might occur during XIV operation or a hot upgrade, customers must be on “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” or above before the upgrade.

Symptom

Servers disconnect from XIV

Cause

  1. In some cases of module failure (for example, an SMI timeout), a Windows 2003 server might disconnect from the XIV
  2. During an XIV hot upgrade, a Windows 2003 server might disconnect from the XIV
  3. Extreme steady-state situations

 

Environment

Windows 2003 or Windows 2003 R2 in a cluster environment connected to XIV

 

Resolving the problem

In a Windows 2003 or Windows 2003 R2 cluster environment, customers must be on “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” or above.

This release of “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” contains a fix to avoid potential loss of access.
Here is the link to download “IBM XIV Host Attachment Kit for Windows, Version 1.5.3”:

http://www-01.ibm.com/support/docview.wss?rs=1319&context=STJTAG&context=HW3E0&dc=D400&q1=ssg1*&uid=ssg1S4000795&loc=en_US&cs=utf-8&lang=en

 


Storwize V7000 Node Canisters May Shut Down or Reboot Unexpectedly During Normal Operation

Storwize V7000 node canisters may shut down or reboot during normal operation, leading to a loss of host I/O access.

Description:

Storwize V7000 node canisters running V6.1.0.0 – V6.1.0.4 code levels may shut down without warning during normal I/O operations.

These shutdown events will typically occur on both node canisters in the Storwize V7000 system, with the second node canister shutting down a number of hours after the first. Once the second node canister has shut down, hosts lose access to the disks presented by the Storwize V7000 until at least one of the node canisters is manually brought back online.

Workaround:

If this issue is encountered on V6.1.0.0 – V6.1.0.4, the recovery action is to reseat each offline node canister in order to bring it back online.
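Before reseating, it can help to confirm from the CLI which canister is offline. A minimal sketch, assuming SSH access to the system's management IP with an administrative user (the address and user name here are placeholders):

ssh superuser@cluster_mgmt_ip "lsnodecanister"

The status column in the output shows which node canister is offline and therefore needs to be reseated.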

Partial Fix Introduced in V6.1.0.5:

A partial fix was introduced in V6.1.0.5, which caused node canisters that experienced this condition to reboot and automatically resume I/O operations, rather than shut down and remain offline. Customers running V6.1.0.5 code are however still exposed to the risk of both node canisters rebooting at the same time, which could lead to a short, temporary outage to host I/O.

Complete Fix:

This issue has been fully resolved by APAR IC74088 in the V6.1.0.6 release. Please visit the following URL to download the latest V6.1.0.x code:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003748&myns=s028&mynp=OCST3FR7&mync=E


IBM 46M6049 8 Gb FC HBA with Vmware ESX 4.1

When VMware ESX 4.1 is installed on a server with an IBM 46M6049 adapter, ESX 4.1 does not detect the HBA. To fix this, you need to download and install the relevant drivers from the VMware website (http://downloads.vmware.com/d/details/esx4_brocade_fcoe_dt/ZHcqYnRlZUBidGR3). Even though VMware ESX has broad hardware support, after installing ESX you should always check whether any extra drivers are needed. From time to time, a missing-driver problem can be mistaken for a hardware failure.
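After installing the driver package, you can confirm from the ESX 4.1 service console that the HBA is now visible; a minimal sketch, where the module name bfa (the usual name of the Brocade Fibre Channel driver) is an assumption:

# list the loaded driver modules and look for the Brocade module
esxcfg-module -l | grep -i bfa
# list the storage adapters that the host now sees
esxcfg-scsidevs -a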

Good luck.


H196488: DS3500/DS3950/DS5000 systems not working with Brocade on 8 Gbps host ports – IBM System Storage

IBM finally published, on 02.02.2011, its article about the DS3500/DS3950 and DS5000 series products not working by default over Brocade fibre switch connections. You can find the link below. I had already shared this information in an article I published on 25.11.2010 ( https://cemguneyli.com.tr/?p=146 ). If you are installing one of these products and will be using a Brocade fibre switch, you first need to apply the specified port settings on the switch (individually, for each port). Otherwise, the links coming from the storage do not come online.

http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5083089&brandind=5000028&myns=s028&mync=E
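For reference, the per-port setting usually discussed for 8 Gbps Brocade ports in this context is the fill word. The sketch below uses a placeholder port; take the required mode value from the linked IBM document, not from here:

portcfgshow 1/0
portcfgfillword 1/0, 3

The first command displays the current port configuration, including the fill word; the second sets the fill word mode for that port (3 is shown only as an example value).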

What can I say; after more than three months, the article was finally published.
Better late than never…
