Archive for category Sys-IBM
The Tivoli Storage Manager version that is used in Information Archive can sometimes trigger frequent reorganization of the collection database. This situation can degrade the performance of the database.
The frequent attempts to reorganize the database occur because the automatic reorganization does not succeed.
Diagnosing the problem
Log on to the administrative interface with a user ID that has the tsmAdministrator role.
- Expand the ‘Tivoli Storage Manager’ menu item, select Manage Servers and select the server to be checked.
- On the ‘Server Properties’ page select ‘Activity Log’ and press Update Table.
If the problem exists, the activity log frequently contains lines similar to the following example, for example every 10 minutes:
2011-05-06 00:57:21 ANR0293I Reorganization for table AF.Clusters started.
2011-05-06 00:57:26 ANR0294I Reorganization for table AF.Clusters ended.
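If you export the activity log to a text file, a quick shell pipeline can confirm how often each table is being reorganized. This is a sketch only: the file actlog.txt and the lines written into it below are a hypothetical sample shaped like the messages above, not real appliance output.

```shell
# Hypothetical excerpt of an exported activity log; in practice, replace
# this here-document with your real export (for example, actlog.txt).
cat > actlog.txt <<'EOF'
2011-05-06 00:57:21 ANR0293I Reorganization for table AF.Clusters started.
2011-05-06 00:57:26 ANR0294I Reorganization for table AF.Clusters ended.
2011-05-06 01:07:21 ANR0293I Reorganization for table AF.Clusters started.
EOF
# Count how often reorganization of each table was started (ANR0293I).
grep 'ANR0293I' actlog.txt | awk '{print $(NF-1)}' | sort | uniq -c
```

A table that appears with a high count over a short time window is the one that automatic reorganization keeps retrying.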
Resolving the problem
If the problem occurs, temporarily switch off automatic reorganization and initiate a manual reorganization. Switch automatic reorganization back on after the problem is resolved.
Notes about the procedure:
- The steps require root access to the cluster nodes. If enhanced tamper protection is set, you must install the Emergency Support Access (ESA) patch on the appliance. To obtain the ESA patch, go to the following website:
- For the purpose of this example, the user is initially logged on to cluster node server ianode1.
- The collection in the following example is named “FILE02”. Substitute the correct collection name when you run the commands in the instructions.
- All examples include the command prompt text and the expected results.
Complete the following steps to resolve the problem:
1. From the KVM console, log on to a cluster node server and change to the root user:
2. Change to the /tsm directory for the collection and back up the dsmserv.opt file:
ianode1:~ # cd /tiam/FILE02/tsm/
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt dsmserv.opt.orig
3. Append “ALLOWREORGTABLE NO” to the end of the dsmserv.opt file:
ianode1:/tiam/FILE02/tsm # echo "" >> dsmserv.opt
ianode1:/tiam/FILE02/tsm # echo "ALLOWREORGTABLE NO" >> dsmserv.opt
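Steps 2 and 3 can be combined into a small idempotent sketch that also verifies the result. For illustration it runs in a scratch directory with a dummy dsmserv.opt; on the appliance you would run the same copy, append, and grep commands in /tiam/FILE02/tsm instead.

```shell
# Scratch setup for illustration only; the real file lives in /tiam/FILE02/tsm.
workdir=$(mktemp -d)
cd "$workdir"
printf 'COMMMETHOD TCPIP\n' > dsmserv.opt   # stand-in for the real options file

cp dsmserv.opt dsmserv.opt.orig             # step 2: back up the original
# Step 3, made idempotent: append the option only if it is not already set.
grep -q '^ALLOWREORGTABLE' dsmserv.opt || printf '\nALLOWREORGTABLE NO\n' >> dsmserv.opt
grep -n 'ALLOWREORGTABLE' dsmserv.opt       # confirm the new setting
```

The grep guard avoids appending the option twice if the step is rerun.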
4. Log on to the Information Archive administrative interface:
- Ensure that there is no collection I/O activity, then suspend the collection by clicking Information Archive > System Overview > Collections, and then the Suspend collection icon.
- Resume the collection. The new “ALLOWREORGTABLE” server setting is now active.
5. Change back to the cluster node and find the DB2 instance user for the collection.
You can complete this action on any of the cluster nodes:
ianode1:/tiam/FILE02/tsm # ls -d /tiam/*/tsm/u*
In the example, the DB2 instance user is “u1”.
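The instance user name is simply the last path component of the directory that the ls command prints. A minimal sketch, using the example path from this procedure (substitute the path printed on your system):

```shell
# Example path from the listing above; substitute your collection's path.
instdir=/tiam/FILE02/tsm/u1
instuser=$(basename "$instdir")
echo "$instuser"   # prints: u1
```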
6. Locate the cluster node where the Tivoli Storage Manager server is currently running:
ianode1:/tiam/FILE02/tsm # mb_display_location.py -r -t
7. Locate the line containing “tsm” and the collection name in the output.
In this example, the Tivoli Storage Manager server for collection “FILE02” is running on ianode2.
start of /opt/tivoli/tiam/bin/mb_display_location.py.
GPFS nodeset Node list
ianode1 ianode1 ianode2
end of /opt/tivoli/tiam/bin/mb_display_location.py. (None)
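To pick out the relevant line without reading the whole listing, you can filter the locator output with grep. The sample lines below are hypothetical — the real output format of mb_display_location.py may differ — and on the appliance you would pipe the command's output into grep rather than use a saved file.

```shell
# Hypothetical sample of the locator output, saved to a file for
# illustration; the real format of mb_display_location.py -r -t may differ.
cat > /tmp/locations.txt <<'EOF'
FILE02 tsm ianode2
FILE02 mgmt ianode1
EOF
# Keep only the line naming the TSM server for collection FILE02.
grep 'tsm' /tmp/locations.txt | grep 'FILE02'
```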
8. If the Tivoli Storage Manager server is running on a different cluster node than where you are currently logged on as root, log on to the cluster node where the Tivoli Storage Manager server is running.
ianode1:/var/opt/tivoli/tiam/log # ssh ianode2
Last login: Fri Apr 29 10:39:26 2011 from ianode1
9. Change the properties of the DB2 database, by completing the following steps:
a. Change to the DB2 instance user and run the following command:
ianode2:~ # su - u1
b. Run the following command to “source” the DB2 profile:
u1@ianode2:~> . ~/sqllib/db2profile
c. Connect to the Tivoli Storage Manager database:
u1@ianode2:~> db2 connect to TSMDB1
Database Connection Information
Database server = DB2/LINUXX8664 9.5.5
SQL authorization ID = U1
Local database alias = TSMDB1
d. Manually reorganize the database, by running the following command:
u1@ianode2:~> db2 reorg table TSMDB1.AF_CLUSTERS
DB20000I The REORG command completed successfully.
e. Run the DB2 “RUNSTATS” command:
u1@ianode2:~> db2 RUNSTATS ON TABLE TSMDB1.AF_CLUSTERS AND SAMPLED DETAILED INDEXES ALL
DB20000I The RUNSTATS command completed successfully.
f. Exit from the DB2 instance user, by running the following command:
u1@ianode2:~> exit
g. If you changed to a different cluster node server to run the DB2 commands, change back to the cluster node where you were originally logged on, by running the following command:
ianode2:~ # exit
Connection to ianode2 closed.
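Steps 9b through 9e can be collected into a helper script to run as the DB2 instance user. The db2 commands themselves can only execute on the appliance, so this sketch only generates and syntax-checks the script; the database and table names are taken from the steps above.

```shell
# Generate a helper script containing the DB2 steps (9b-9e). It is only
# syntax-checked here, because the db2 command exists only on the appliance.
cat > /tmp/reorg_af_clusters.sh <<'EOF'
# Run as the DB2 instance user, e.g.: su - u1 -c 'sh /tmp/reorg_af_clusters.sh'
. ~/sqllib/db2profile
db2 connect to TSMDB1
db2 reorg table TSMDB1.AF_CLUSTERS
db2 "RUNSTATS ON TABLE TSMDB1.AF_CLUSTERS AND SAMPLED DETAILED INDEXES ALL"
db2 connect reset
EOF
sh -n /tmp/reorg_af_clusters.sh && echo "script syntax OK"
```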
10. Change back to the /tsm directory for the collection and restore the backup of the dsmserv.opt file:
ianode1:~ # cd /tiam/FILE02/tsm/
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt.orig dsmserv.opt
11. Change back to the administrative interface, and complete the following steps:
- Suspend the collection.
- Resume the collection.
The original Tivoli Storage Manager server setting is now active. The automatic database reorganization is switched on again.
This document contains information related to obtaining the IBM Storage Management Pack for Microsoft System Center Operations Manager v1.1.0.
The package is a set of software modules, or management packs, that allow you to access and monitor the IBM Storwize V7000, SVC, XIV, and DS8000 storage systems using the host-based Microsoft SCOM interface. Refer to the release notes and user guide for details about supported storage device versions.
To avoid potential loss of access that might happen during XIV operation or a hot upgrade, customers must be on "IBM XIV Host Attachment Kit for Windows, Version 1.5.3" or above before the upgrade.
Servers disconnect from XIV:
- In some cases of module failure (for example, an SMI timeout), a Windows 2003 server might disconnect from the XIV
- During an XIV hot upgrade, a Windows 2003 server might disconnect from the XIV
- Extreme steady-state situations
Windows 2003 or Windows 2003 R2 in a cluster environment connected to XIV
Resolving the problem
In a Windows 2003 or Windows 2003 R2 cluster environment, customers must be on "IBM XIV Host Attachment Kit for Windows, Version 1.5.3" or above.
This release of "IBM XIV Host Attachment Kit for Windows, Version 1.5.3" contains a fix that avoids the potential loss of access.
Here is the link to download “IBM XIV Host Attachment Kit for Windows, Version 1.5.3” :
Storwize V7000 node canisters may shut down or reboot during normal operation, leading to a loss of host I/O access.
Storwize V7000 node canisters running V126.96.36.199 – V188.8.131.52 code levels may shut down without warning during normal I/O operations.
These shutdown events typically occur on both node canisters in the Storwize V7000 system, with the second node canister shutting down a number of hours after the first. Once the second node canister has shut down, hosts lose access to the disks presented by the Storwize V7000 until at least one of the node canisters has been manually brought back online.
If this issue is encountered on V184.108.40.206 – V220.127.116.11, the recovery action is to reseat each offline node canister in order to bring it back online.
Partial Fix Introduced in V18.104.22.168
A partial fix was introduced in V22.214.171.124, which causes node canisters that experience this condition to reboot and automatically resume I/O operations, rather than shut down and remain offline. However, customers running V126.96.36.199 code are still exposed to the risk of both node canisters rebooting at the same time, which could lead to a short, temporary outage of host I/O.
This issue has been fully resolved by APAR IC74088 in the V188.8.131.52 release. Please visit the following URL to download the latest V6.1.0.x code:
When VMware ESX 4.1 is installed on a server fitted with an IBM card, part number 46M6049, ESX 4.1 does not see the HBA card. To fix this, you need to download and install the relevant drivers from VMware's website (http://downloads.vmware.com/d/details/esx4_brocade_fcoe_dt/ZHcqYnRlZUBidGR3). Even though the VMware ESX operating system has broad hardware support, after installing ESX you should always check whether any extra drivers are needed. A missing-driver problem can sometimes be mistaken for a hardware failure.
H196488: DS3500/DS3950/DS5000 systems not working with Brocade on 8 Gbps host ports – IBM System Storage
On 02.02.2011, IBM finally published its article about the DS3500/DS3950 and DS5000 series products not working by default over a Brocade fiber switch connection. You can find the link below. I had already shared this information in my article published on 25.11.2010 ( https://cemguneyli.com.tr/?p=146 ). If you are installing one of these products and will be using a Brocade fiber switch, you first need to apply the specified port settings on the switch (individually for each port). Otherwise, the links from the storage do not come online.
What can I say; after more than three months, the article was finally published.
Better late than never…