Tuesday, November 24, 2015

How to connect to a Workstation VM with PuTTY.

Connecting to a local VMware Workstation VM over SSH is the same as connecting to any Linux guest residing on an ESX host, only simpler since everything is local.


Within the VMware Workstation, locate the IP address
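If the Workstation summary screen does not show it, you can also pull the address from inside the guest. A minimal sketch, assuming a typical Linux guest:

ip addr show     # look for the inet line on the NAT/host-only adapter (eth0, ens33, etc.)
ifconfig -a      # alternative on older distributions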

Type the IP into PuTTY. The IP usually starts with 192.168.x.x since it is a local (NAT or host-only) network address.
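If you prefer the command line over PuTTY, plain ssh from the host works too. A quick example; the address 192.168.146.128 and the user name are placeholders for whatever your VM actually uses:

ssh youruser@192.168.146.128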

That's it.

Cloudera Hadoop namenode refused to stay up.

I installed the Cloudera Hadoop QuickStart VM into Oracle VirtualBox. I have the same deployment on a Mac and on other Windows boxes and never had an issue, but this one did.

[cloudera@quickstart ~]$ hadoop fs -ls
ls: Call From quickstart.cloudera/10.0.2.15 to quickstart.cloudera:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
[cloudera@quickstart ~]$ service hadoop-hdfs-namenode status
Hadoop namenode is dead and pid file exists                [FAILED]
[cloudera@quickstart ~]$ service  status
status: unrecognized service
[cloudera@quickstart ~]$ uname -a
Linux quickstart.cloudera 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[cloudera@quickstart ~]$ service hadoop-hdfs-quickstart.cloudera status
hadoop-hdfs-quickstart.cloudera: unrecognized service
[cloudera@quickstart ~]$ service hadoop-hdfs-namemode status
hadoop-hdfs-namemode: unrecognized service
[cloudera@quickstart ~]$ service hadoop-hdfs-namenode status
Hadoop namenode is dead and pid file exists                [FAILED]
[cloudera@quickstart ~]$ service hadoop-hdfs-quickstart.cloudera status
hadoop-hdfs-quickstart.cloudera: unrecognized service
[cloudera@quickstart ~]$ service hadoop-hdfs-namenode start
Error: root user required

[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out
Started Hadoop namenode:                                   [  OK  ]
[cloudera@quickstart ~]$ 




Not really sure what's going on. As soon as I started the namenode, it went back down in less than 30 seconds. This was fixed when I redeployed. I have the same installation on other, more powerful machines and all of them worked just fine. I suspect the issue has something to do with the low specs of the machine I was using.
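If you hit the same thing and can't just redeploy, the namenode log is the place to look before anything else. A minimal sketch, assuming the default CDH QuickStart log location shown in the startup message above:

sudo tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.log
free -m                                    # check whether the VM is simply running out of memory
sudo service hadoop-hdfs-namenode status   # confirm whether the namenode stayed up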

Monday, November 16, 2015

vCenter 6.0 upgrade failed.

In some cases, upgrading vCenter 5.5 to 6.0 can hit the following errors during the pre-upgrade check. They are caused by artifacts left behind since vCenter 2.5 that were carried over from version to version.


[HY000](20000) [Oracle][ODBC][Ora]ORA-20000: ERROR ! Missing constraints:
 VPX_DEVICE.VPX_DEVICE_P1,VPX_DATASTORE.FK_VPX_DS_DC_REF_VPX_ENT;
ORA-06512: at line 260

Create the constraints. Use at your own risk!

alter table VPX_DEVICE add constraint VPX_DEVICE_P1 primary key (DEVICE_ID) using index VPX_DEVICE_P1;
alter table VPX_DATASTORE rename constraint FK_VPX_DS_REF_VPX_ENTITY to FK_VPX_DS_DC_REF_VPX_ENT;
alter table VPX_DATASTORE rename constraint FK_VPX_DS_REF_VPX_ENTI to FK_VPX_DS_REF_VPX_ENTITY;


Another variant of the issue is that the unique index does not even exist. One can simply try to drop and recreate it.

drop index VPX_DEVICE_P1;
create unique index VPX_DEVICE_P1 ON VPX_DEVICE(DEVICE_ID);
alter table VPX_DEVICE add constraint VPX_DEVICE_P1 primary key (DEVICE_ID) using index VPX_DEVICE_P1;
alter table VPX_DATASTORE rename constraint FK_VPX_DS_REF_VPX_ENTITY to FK_VPX_DS_DC_REF_VPX_ENT;
alter table VPX_DATASTORE rename constraint FK_VPX_DS_REF_VPX_ENTI to FK_VPX_DS_REF_VPX_ENTITY;
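After applying either fix, it is worth confirming the objects are actually there before re-running the pre-upgrade check. A rough verification sketch, assuming you connect as the vCenter schema owner (commonly vpxadmin; the vcdb connect string is a placeholder):

sqlplus vpxadmin@vcdb

select index_name, uniqueness from user_indexes where index_name = 'VPX_DEVICE_P1';
select constraint_name, constraint_type, status from user_constraints
 where table_name in ('VPX_DEVICE', 'VPX_DATASTORE');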


Tuesday, July 7, 2015

Oracle listener no longer working after migrating to a different pool

These are common Oracle listener issues when deploying in vCloud, especially when the vApp is moved to another location or goes through IP and hostname changes.


Scenario

The vApp Cloud2 resided in another cloud farm. It was shut down and migrated to a new cloud farm, and the IP and vApp name were forced to change. Obviously, this impacts /etc/hosts, listener.ora and tnsnames.ora as well.

After the migration, starting the listener with lsnrctl will fail. Using netca or changing listener.ora and /etc/hosts alone is likely not going to help.

One of the reasons is that the listener is still looking at the old host_id.

In the listener's alert log (log.xml):

/home/oracle/app/oracle/diag/tnslsnr/Cloud2-001/listener/alert/log.xml



<msg time='2015-07-06T13:26:52.010-06:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='Cloud2-001'
host_addr='UNKNOWN'>
<txt>TNS-12545: Connect failed because target host or object does not exist
TNS-12560: TNS:protocol adapter error
  TNS-00515: Connect failed because target host or object does not exist
   Linux Error: 99: Cannot assign requested address
</txt>
</msg>
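The TNS-12545 / Linux Error 99 combination usually means the host name the listener is binding to no longer resolves to an address on the box. A quick way to see the mismatch (a sketch, assuming the oracle OS user and a standard $ORACLE_HOME layout):

hostname
grep -i "HOST" $ORACLE_HOME/network/admin/listener.ora
getent hosts Cloud2-001      # does the old name still resolve, and to what?
ip addr                      # compare against the addresses actually on the box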

Resolution

Changing the Computer Name to match the new host name in /etc/hosts and listener.ora will fix the issue. Once the Computer Name is changed, boot the vApp back up and start the listener with lsnrctl.
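Once the vApp is back up, a quick sanity check before declaring victory (a minimal sketch, run as the oracle OS user):

hostname           # should now match the name in /etc/hosts and listener.ora
lsnrctl start
lsnrctl status     # the listener should report the new host and stay up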

In short, the Computer Name is going to impact the Oracle listener.

Wednesday, January 21, 2015

Windows/VMware: The disk is offline because of policy by an administrator

I presented the vApp to the cloud but the disks never showed up. Checking Device Manager, it stated "The disk is offline because of policy by an administrator". I was looking up and down at the Windows policies and nothing indicated that my vApp inherited this policy from the template. When I checked Disk Management, I realized the disks were all there; they just never came online on first boot.
Missing all the disks.
Some silly warning.
Bring them all online, initialize, and format.
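If you would rather script it than click through Disk Management, diskpart can do the same thing. A sketch only; disk 1 is a placeholder for whichever disk shows as offline, and the SAN policy change applies to all newly presented disks:

diskpart
DISKPART> san                       # show the current SAN policy (often Offline Shared on server templates)
DISKPART> san policy=OnlineAll      # bring newly discovered disks online automatically
DISKPART> list disk
DISKPART> select disk 1
DISKPART> attributes disk clear readonly
DISKPART> online disk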


After a few reboots, the disks remained intact. Perhaps this is a problem specific to Windows disks presented as VMDKs? I have plenty of Linux vApps that never showed this problem.