Archive for category News

Two new technical papers from VMware related to VMware ESXi

VMware released two new technical papers on 06/07/2011 related to ESXi operations and migrations. You can find the official download links below:

 

Migrating to VMware ESXi

http://www.vmware.com/files/pdf/VMware-ESXi-41-Migration-Guide-TWP.pdf

 

VMware ESXi 4.1 Operations Guide

http://www.vmware.com/files/pdf/VMware-ESXi-41-Operations-Guide-TWP.pdf

 


vCenter Configuration Manager login fails with error: Your ID has either not been created in VCM, or you have no current VCM roles

Symptoms

The vCenter Configuration Manager Console login page appears, but when you click the Login button you receive this error message:
Your ID has either not been created in VCM, or you have no current VCM roles.
Error Message:
Unknown Error

Resolution

This issue occurs if the advanced SQL Server configuration option Ole Automation Procedures is disabled.

The VCM console relies heavily on SQL Server's ability to create automation objects. To resolve this issue, run these statements in SQL Server Management Studio (SSMS):
-- Enable access to advanced configuration options
sp_configure 'show advanced options', 1
GO
RECONFIGURE;
GO
-- Enable the OLE Automation Procedures feature that the VCM console requires
sp_configure 'Ole Automation Procedures', 1
GO
RECONFIGURE;
GO
-- Hide advanced options again
sp_configure 'show advanced options', 0
GO
RECONFIGURE;
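
If you want to confirm the change from a command prompt rather than SSMS, you can query the running value with sqlcmd. This is a minimal sketch, assuming sqlcmd is available, Windows authentication is permitted, and <vcm-sql-server> is a placeholder for your VCM database server name:

sqlcmd -S <vcm-sql-server> -E -Q "SELECT name, value, value_in_use FROM sys.configurations WHERE name = 'Ole Automation Procedures'"

The value_in_use column should show 1 once the RECONFIGURE has taken effect.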

Source: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037806


Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume

If a Storwize V7000 or SAN Volume Controller FlashCopy mapping is started on Volumes greater than 2TB in size, the FlashCopy operation will write incorrect data to the target Volume. The target Volume will therefore not contain the same data as the source Volume.

Content

An issue has been discovered that will result in incorrect data being written to FlashCopy target Volumes greater than 2TB in size.

Data on the source Volume will be unaffected by this issue. Any FlashCopy replicated Volumes less than 2TB in size will also be unaffected by this issue.

Customers are strongly advised not to perform any FlashCopy operations on Volumes greater than 2TB in size, until a fix is available and has been applied.

Customers should also be aware that any previously created FlashCopy target Volumes greater than 2TB in size will have incorrect data on them, and should be treated as inconsistent with the original source data.

Fix

This issue will be fixed in the upcoming 6.1.0.9 PTF release, and also in the next major release, 6.2.0.x.

 

Cross Reference information

Segment | Product | Component | Platform | Version
Storage Virtualization | SAN Volume Controller | 6.1 | SAN Volume Controller | 6.1
Storage Virtualization | SAN Volume Controller | V5.1.x | SAN Volume Controller | 5.1
Storage Virtualization | SAN Volume Controller | V4.3.x | SAN Volume Controller | 4.3.0, 4.3.1

 


Source: https://www-304.ibm.com/support/docview.wss?mynp=OCST3FR7&mync=E&uid=ssg1S1003840&myns=s028

 


Tivoli Storage Manager sometimes triggers frequent reorganization of a collection database

The Tivoli Storage Manager version that is used in Information Archive can sometimes trigger frequent reorganization of the collection database. This situation can degrade the performance of the database.

Cause

The frequent attempts to reorganize the database occur because the automatic reorganization does not succeed.

Diagnosing the problem

Log on to the administrative interface with a user ID that has the tsmAdministrator role.

  1. Expand the ‘Tivoli Storage Manager’ menu item, select Manage Servers and select the server to be checked.
  2. On the ‘Server Properties’ page select ‘Activity Log’ and press Update Table.

If the problem exists, the activity log frequently contains lines similar to the following example (for example, every 10 minutes):

2011-05-06 00:57:21 ANR0293I Reorganization for table AF.Clusters started.
2011-05-06 00:57:26 ANR0294I Reorganization for table AF.Clusters ended.
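
If you prefer to check from the command line, the same messages can be pulled from the server activity log with the Tivoli Storage Manager administrative client. This is a minimal sketch; the administrative ID and password are placeholders, and it assumes a dsmadmc client is configured against the collection's Tivoli Storage Manager server:

# List today's table-reorganization start messages (ANR0293I); entries recurring
# every few minutes indicate the problem described above.
dsmadmc -id=admin -password=secret "query actlog begindate=today search=ANR0293I"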

 

Resolving the problem

If the problem occurs, you must switch off automatic reorganization temporarily and initiate a manual reorganization. The automatic reorganization is switched on again after the problem is resolved.

Notes about the procedure:

  • The steps require root access to the cluster nodes. If enhanced tamper protection is set, you must install the Emergency Support Access (ESA) patch on the appliance. To obtain the ESA patch, go to the following website:
    https://w3.tap.ibm.com/w3ki02/display/TIAM/ESA+Patch+instructions
  • For the purpose of this example, the user is initially logged on to cluster node server ianode1.
  • The collection in the following example is named “FILE02”. Substitute the correct collection name when you run the commands in the instructions.
  • All examples include the command prompt text and the expected results.

Complete the following steps to resolve the problem:
1. From the KVM console, log onto a cluster node server and change to the root user:

iaadmin@ianode1:~> su
Password:
2. Change to the /tsm directory for the collection and back up the dsmserv.opt file:

ianode1:~ # cd /tiam/FILE02/tsm/
ianode1:/tiam/FILE02/tsm # cp dsmserv.opt dsmserv.opt.orig
3. Append “ALLOWREORGTABLE NO” to the end of the dsmserv.opt file:

ianode1:/tiam/FILE02/tsm # echo "" >> dsmserv.opt
ianode1:/tiam/FILE02/tsm # echo "ALLOWREORGTABLE NO" >> dsmserv.opt
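
Before continuing, you can optionally confirm that the option was appended and that the backup is intact; a quick sketch:

# The option should appear exactly once, and the only difference from the backup
# should be the appended lines.
grep -n ALLOWREORGTABLE dsmserv.opt
diff dsmserv.opt.orig dsmserv.opt
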
4. Log on to the Information Archive administrative interface:

    1. Ensure that there is no collection I/O activity and suspend the collection by clicking Information Archive > System Overview > Collections, and then the Suspend collection icon.
    2. Resume the collection. The new “ALLOWREORGTABLE” server setting is now active.

5. Change back to the cluster node and find the DB2 instance user for the collection.

You can complete this action on any of the cluster nodes:

ianode1:/tiam/FILE02/tsm # ls -d /tiam/*/tsm/u*

Sample output:

/tiam/FILE02/tsm/u1

In the example, the DB2 instance user is “u1”.
6. Locate the cluster node where the Tivoli Storage Manager server is currently running:

ianode1:/tiam/FILE02/tsm # mb_display_location.py -r -t
7. Locate the line containing “tsm” and the collection name in the output.

In this example, the Tivoli Storage Manager server for collection “FILE02” is running on ianode2.

Sample output:

start of /opt/tivoli/tiam/bin/mb_display_location.py.
mmlsnode
GPFS nodeset Node list
------------- -------------------------------------------------------
ianode1 ianode1 ianode2

mmlsnode
GPFS nodeset Node list
------------- -------------------------------------------------------
ianode1 ianode1 ianode2

returned from /opt/tivoli/tiam/bin/mb_display_location.py:
9.155.104.9|ianode1|||
9.155.104.12|ianode1|ctdb||
9.155.104.10|ianode2|||
9.155.104.11|ianode2|ctdb||
172.31.4.1|ianode2|tsm|FILE02|
end of /opt/tivoli/tiam/bin/mb_display_location.py. (None)
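
Rather than scanning this output by eye, the pipe-delimited lines at the end can be filtered automatically. This is a small sketch based on the field layout shown above (IP|node|service|collection|); substitute your collection name for FILE02:

# Print the cluster node that currently hosts the TSM server for collection FILE02
/opt/tivoli/tiam/bin/mb_display_location.py -r -t | awk -F'|' '$3 == "tsm" && $4 == "FILE02" { print $2 }'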

8. If the Tivoli Storage Manager server is running on a different cluster node than where you are currently logged on as root, log on to the cluster node where the Tivoli Storage Manager server is running.

ianode1:/var/opt/tivoli/tiam/log # ssh ianode2
Last login: Fri Apr 29 10:39:26 2011 from ianode1
9. Change the properties of the DB2 database, by completing the following steps:

a. Change to the DB2 instance user and run the following command:

ianode2:~ # su - u1
b. Run the following command to “source” the DB2 profile:

u1@ianode2:~> .  ~/sqllib/db2profile
c. Connect to the Tivoli Storage Manager database:

u1@ianode2:~> db2 connect to TSMDB1
Expected result:

Database Connection Information

Database server        = DB2/LINUXX8664 9.5.5
SQL authorization ID   = U1
Local database alias   = TSMDB1

d. Manually reorganize the database, by running the following command:
u1@ianode2:~> db2 reorg table TSMDB1.AF_CLUSTERS

Expected result:

DB20000I  The REORG command completed successfully.
e. Run the DB2 “RUNSTATS” command:

u1@ianode2:~> db2 RUNSTATS ON TABLE TSMDB1.AF_CLUSTERS AND SAMPLED DETAILED INDEXES ALL

Expected result:

DB20000I  The RUNSTATS command completed successfully.
f. Exit from the DB2 instance user, by running the following command:

u1@ianode2:~> exit

Expected results:

logout
g. If you changed to a different cluster node server to run the DB2 commands, change back to the cluster node where you were originally logged on, by running the following command:

ianode2:~ # exit
Expected results:

logout
Connection to ianode2 closed.
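
For reference, the commands in steps 9a through 9f can also be run in one pass from the root shell on the node that hosts the Tivoli Storage Manager server. This is a minimal sketch, assuming the DB2 instance user found in step 5 is u1:

# Source the DB2 profile, reorganize the table, refresh its statistics, then close the connection
su - u1 -c '
  . ~/sqllib/db2profile
  db2 connect to TSMDB1
  db2 reorg table TSMDB1.AF_CLUSTERS
  db2 RUNSTATS ON TABLE TSMDB1.AF_CLUSTERS AND SAMPLED DETAILED INDEXES ALL
  db2 terminate
'
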
10. Restore the backup of the dsmserv.opt file:

ianode1:/var/opt/tivoli/tiam/log # cp dsmserv.opt.orig dsmserv.opt
11. Change back to the administrative interface, and complete the following steps:

    1. Suspend the collection.
    2. Resume the collection.

The original Tivoli Storage Manager server setting is now active. The automatic database reorganization is switched on again.

Related information

TSM database considerations

 

 


IBM XIV Host Attachment Kit v1.6 for AIX, SLES, HP-UX, RHEL, Windows, Solaris released

IBM XIV Host Attachment Kit for AIX, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for AIX is a software pack that simplifies the task of connecting an IBM AIX host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native AIX multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for SLES, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for SLES is a software pack that simplifies the task of connecting an SLES host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native SLES multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for HP-UX, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for HP-UX is a software pack that simplifies the task of connecting an HP-UX host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native HP-UX multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for RHEL, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for RHEL is a software pack that simplifies the task of connecting a RHEL host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native RHEL multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for Windows, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for Windows is a software pack that simplifies the task of connecting a Microsoft Windows Server host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native Windows Server multipath configuration. To further help the administrator, the HAK also installs required Windows Server hotfixes, and checks the compatibility of the installed HBA drivers as it was known when this HAK was released. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.

IBM XIV Host Attachment Kit for Solaris, Version 1.6

The IBM XIV Host Attachment Kit (HAK) for Solaris is a software pack that simplifies the task of connecting a Solaris host to the IBM XIV Storage System. The HAK provides a set of CLI-based tools that automatically detect any physically connected XIV storage system (single system or an array), define the host on the XIV storage system, and apply best-practice native Solaris multipath configuration. After the connection is established, XIV-based storage volumes can be mapped to the host without any additional manual configuration, and can be accessed and used from the host for a range of storage operations.
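
The announcement text does not list the individual utilities, but the HAK workflow is essentially the same on all of these platforms. The sketch below uses xiv_attach (the interactive attachment wizard) and xiv_devlist (volume listing), which are the HAK's commonly documented CLI tools; verify the exact command names and options against the release notes for your platform:

# Detect connected XIV systems, define this host on them, and apply the
# best-practice multipath configuration for the local operating system
xiv_attach

# After volumes are mapped to the host on the XIV side, list the XIV devices
# visible to this host
xiv_devlist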

 


IBM disk systems- Shutting down a cluster node server in the administrative interface does not work

When you shut down a cluster node server in the administrative interface, the final status of the server is listed as “Running Maintenance Mode” instead of the expected status of “Stopped”.

Symptom

After you shut down the cluster node server in the administrative interface, the status “Shutting Down” is displayed for a while and then changes to “Running Maintenance Mode” instead of “Stopped”. You can still reach the cluster node server by using ping and SSH.

Cause

The administrative interface uses the power control adapter of the cluster node server to initiate a clean shutdown of the server. If the power control adapter is not working correctly, or if the operating system on the node hangs during shutdown, the server might stay in a running state with a part of the processes stopped. The administrative interface detects this situation as a state similar to “Running Maintenance Mode”.

 

Resolving the problem

Complete the following steps to resolve the problem:

  1. Force a shutdown of the cluster node server. This method works if the operating system is preventing the server from shutting down correctly. Complete the following steps:
    1. Log on to the management console server with the iaadmin ID and password.
    2. Run the following command, where ianodeX is the name of the cluster node server:
      ia_powercontrol -d ianodeX -N
  2. To check if the cluster node server is shut down, run the following command as iaadmin on the management console server, where ianodeX is the name of the server:
    ia_powercontrol -s ianodeX
    The server might take a few minutes to shut down; you can rerun the command to poll its state, as shown in the sketch after this list. The command returns the following message after a successful shutdown:
    Node attached to power control hardware at 'ianodeX' is not powered on.
  3. If the ia_powercontrol command returns error messages, the power control adapter might not be working correctly. Complete the following steps to reset the power control adapter:
    1. Log on to the power control adapter, by running the following command, where ianodeX is the name of the cluster node server: telnet pwrctl-ianodeX
    2. At the login: prompt, enter the following ID in upper-case characters: USERID
    3. At the Password: prompt, enter the following password in upper-case characters, using a zero instead of an “O” character: PASSW0RD
    4. At the system> prompt, run the following command: resetsp
    5. Log off from the power control adapter, by running the following command: exit
    6. Wait for 10 minutes so that the power control adapter can finish booting.
    7. Run the ia_powercontrol command again to force a shutdown.
  4. If you cannot shut down the cluster node server by using the power control adapter, log on to the server as root and run the following command: shutdown -h now
    Note: If the cluster node server is running in ‘Enhanced Tamper Protection’ mode, you must request and install an Emergency Support Access (ESA) patch to set up root access to the server.
  5. If you cannot shut down the cluster node server by using any of the previous steps, press the server power button for several seconds to stop the server.
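
The force-and-verify sequence in steps 1 and 2 can be combined into a short polling loop on the management console server. This is a minimal sketch, run as iaadmin, using ianode2 as a placeholder node name:

# Force an immediate shutdown of the node, then poll its power state once a minute
ia_powercontrol -d ianode2 -N
until ia_powercontrol -s ianode2 | grep -q "is not powered on"; do
  sleep 60
done
echo "ianode2 is powered off"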

Source: https://www-304.ibm.com/support/docview.wss?mynp=OCSSS9C9&mync=E&uid=swg21499443&myns=s028

 


Storwize V7000 Systems Running V6.1.0.0 – V6.1.0.6 Code May Shut Down Unexpectedly

Storwize V7000 Systems Running V6.1.0.0 – V6.1.0.6 Code May Shut Down Unexpectedly During Normal Operation, Resulting in a Loss of Host Access and Potential Loss of Fast-Write Cache Data

 

Storwize V7000 units running code levels between V6.1.0.0 and V6.1.0.6 are exposed to an issue that can result in both node canisters shutting down simultaneously and unexpectedly during normal operation.

 

Content

An issue exists in the V6.1.0.0 – V6.1.0.6 code that can result in both node canisters abruptly shutting down during normal operation, resulting in a loss of hardened configuration metadata on these nodes and the requirement to perform a manual cluster recovery process to restore the configuration. This recovery process may take up to several hours to complete.

 

Additionally, any host I/O data that was resident in the fast-write cache at the time of failure will be unrecoverable.

 

 

Fix

A workaround was introduced in the V6.1.0.7 PTF release which, although it does not eliminate the underlying issue, prevents a shutdown event on one node canister from propagating to the other node canister. This prevents a double-node shutdown from occurring, with its resulting loss of host access to data, and avoids the need for a cluster recovery process. The workaround was further improved in the V6.1.0.8 PTF release to ensure that affected nodes recover automatically.

 

If a single node shutdown event does occur when running V6.1.0.8, this node will automatically recover and resume normal operation without requiring any manual intervention.

 

IBM Development is continuing to work on a complete fix for this issue, to be released in a future PTF; in the meantime, customers should upgrade to V6.1.0.8 to avoid an outage.

 

Please visit the following URL to download the V6.1.0.8 code:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000966

 

Source: https://www-304.ibm.com/support/docview.wss?mynp=OCST3FR7&mync=E&uid=ssg1S1003805&myns=s028

