
Posts

Some Real-time Issues faced during Cassandra Node Extension

Adding a node to Cassandra isn't as simple as it sounds; several factors should be evaluated before proceeding. At Telenet, we encountered some interesting issues while adding 15 nodes to CASSANDRA_CLUST1 and 14 nodes to CASSANDRA_CLUST2. Let's get started and walk through these issues one by one.

Issue: The first CASSANDRA_CLUST1 node could not be added; system.log showed the following error:
java.nio.file.FileSystemException: <PATH>/bb-3116-bti-Data.db: Too many open files

Root cause: Hard and soft open-file limits on the CASSANDRA_CLUST1 nodes were not set per the DataStax recommendations.

Solution: We raised the number of open files on the new CASSANDRA_CLUST1 nodes first. The change was tested on one node, and the Ansible job to add the new nodes was then re-run successfully for the CASSANDRA_CLUST1 nodes. The values in /etc/security/limits.conf were modified to:
<cassandra-user> hard nofile 100000
<cassandra-user> soft nofile 100000
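As a quick sanity check before re-running the Ansible job, the effective limits can be inspected from a shell on each node. This is a minimal sketch: the 100000 target comes from the limits.conf values above, and the /proc lookup for a running Cassandra PID is an assumption, not part of the original fix.

```shell
#!/usr/bin/env bash
# Print the soft and hard open-file limits for the current shell session
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft nofile: ${soft}"
echo "hard nofile: ${hard}"

# For an already-running Cassandra process, the live limits can be read from /proc:
#   grep "Max open files" /proc/<cassandra-pid>/limits
```

Note that limits.conf changes only apply to new login sessions, so the Cassandra process must be restarted (or the node re-provisioned) before the new limits take effect.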
Recent posts

Key Takeaways from the recent hrglobal.drv patch installation for EBS r12.2

1. The MetaLink note does not provide an explicit step to source the patch file system before running the adop command. Please source it before running adop apply so that the right hrglobal.drv is picked up from PATCH_BASE and not RUN_BASE.
2. Run the patch with parallel workers by appending workers=8; again, this is not mentioned in the hrglobal.drv note, and it will speed up the patch apply.
3. There are some prerequisite patches to apply before hrglobal.drv. Make sure they are applied; they are listed near the bottom of the MetaLink document but must be applied before running the hrglobal.drv patch.
4. I tried running DataInstall with the hostname of the DB server, but it failed to execute, while the same command went through when the IP was used:
java oracle.apps.per.DataInstall apps appspwd thin 10.3.x.xxx:<PORT DB>:<DB_SID>
5. Tagging a few important MetaLink notes that you must review before applying hrglobal.drv:
Doc ID 1469456.1 - Datainstall and Hrglobal Application: 12.2 specifics
Doc ID 2006776.…
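To make point 2 concrete, here is a tiny helper that assembles an apply command line with the workers flag appended. This is only a sketch: the helper name and the patches argument are illustrative, and the exact adop arguments for hrglobal.drv should be taken from the MetaLink note; the point here is simply where workers=8 goes.

```shell
#!/usr/bin/env bash
# Build an adop apply command line with parallel workers appended
build_adop_cmd() {
  local driver="$1" workers="$2"
  echo "adop phase=apply patches=${driver} workers=${workers}"
}

# Example: apply hrglobal.drv with 8 parallel workers
build_adop_cmd hrglobal.drv 8
```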

EBS Cloud Manager -- A DBA Sailing around Linux Administration, OCI Cloud Shell, OS Firewalls...

After deploying EBS using Cloud Manager, we were not able to log in to the apps and DB nodes on OCI as the root/opc users. The EBS Cloud Manager guide mentions only one way to log in to the apps and DB nodes after deployment on OCI:
- Log in to Cloud Manager as opc
- sudo su - oracle
- ssh apps node ip
- ssh db node ip
We may require the root OS user for some superuser-related tasks. In my case we had to check a DB node port, as developers were not able to connect using SQL Developer after connecting to the VPN (interesting things coming up for this issue later in this blog). So we were in this scenario:
1. A port is blocked somewhere.
2. The DB node IP is pingable.
3. We can only log in to the DB node as the OS user oracle.
4. We can't check firewall rules without root access.
It all started with setting a root password for this DB node, following the note below.
Ref: How to Reset Root Password in Oracle Linux 7 (Doc ID 1954652.1)
1. Launch Cloud Shell on OCI for the specific ins…
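While waiting for root access, a basic reachability test can be run from any client that can reach the node, using bash's built-in /dev/tcp redirection. This is a sketch under assumptions: the host and the 1521 listener port are placeholders, and "closed" here covers both a firewall block and no listener.

```shell
#!/usr/bin/env bash
# Report whether a TCP port accepts connections from this client
check_port() {
  local host="$1" port="$2"
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example: probe the DB listener port from the client side
check_port 127.0.0.1 1521
```

This doesn't replace checking the firewall rules as root; it only tells you how the port looks from the outside, which is useful for confirming what the developers were seeing over the VPN.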

EBS Cloud Manager Troubleshooting - Creating Backups

EBS Cloud Manager is well automated for setups on OCI, but there are scenarios where DBA intervention is still required. Let's discuss a classic example of such a scenario today (17feb2022).

Task: Create a backup of an EBS Cloud Manager environment, EBS r12.2.9, DB version 12.1.0.2.

When you log in to EBS Cloud Manager, simply check the top-right section. Once you click Create Backup, it will ask you for an encryption password and the apps credentials. The backup is then submitted as a job, and you can get the details of the running backup under the Jobs tab. Please note that EBS Cloud Manager creates an OSS-level backup on Object Storage.

In my case, the job failed at Validate -> EBS cloud backup Application tier validations. Error details:
ERROR : WLS domain size is higher than EBS default threshold: 5120 MB ). Please check and cleanup some of the server log files or any unnecessary file under /u01/install/APPS/fs1/FMW_Home/user_projects/domains/EBS_domain. Failed with code: 1 [2022/02…
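The size validation that failed can be reproduced by hand before submitting the backup job, so you know in advance whether cleanup is needed. A sketch: the 5120 MB threshold comes from the error message above, and the function name is illustrative.

```shell
#!/usr/bin/env bash
# Compare a directory's size in MB against the backup validation threshold
check_domain_size() {
  local domain_dir="$1" threshold_mb="${2:-5120}"
  local size_mb
  size_mb=$(du -sm "$domain_dir" 2>/dev/null | awk '{print $1}')
  if [ "${size_mb:-0}" -gt "$threshold_mb" ]; then
    echo "FAIL: ${size_mb} MB exceeds ${threshold_mb} MB"
  else
    echo "OK: ${size_mb} MB within ${threshold_mb} MB"
  fi
}

# e.g. check_domain_size /u01/install/APPS/fs1/FMW_Home/user_projects/domains/EBS_domain
check_domain_size .
```

If it reports FAIL, trimming old server log files under the domain directory (as the error suggests) should bring it back under the threshold before re-running the backup.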

Steps to compile fmb in EBS - r12.1.x, r12.2.x

Take a backup of the fmx:
$ pwd
/u01/install/APPS/fs1/EBSapps/appl/inv/12.0.0/forms/US
$ cp INVSDOIO.fmx INVSDOIO.fmx17feb2022

Go to AU_TOP:
cd $AU_TOP/forms/US

Template:
frmcmp_batch userid=apps/<apps_pwd> module=<form_name>.fmb output_file=<form_name>.fmx module_type=form batch=no compile_all=special

Example:
frmcmp_batch userid=apps/apps module=INVSDOIO.fmb output_file=INVSDOIO.fmx module_type=form batch=no compile_all=special

Check the on-screen output for any errors.
Ref: R12: How to Compile a Form in Release 12 and 12.2 (Doc ID 1085928.1)
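The manual backup step above can be wrapped in a small helper that stamps the copy with the current date. A sketch: the suffix format mirrors the fmx17feb2022 style used above, and the function name is an assumption.

```shell
#!/usr/bin/env bash
# Copy <form>.fmx to <form>.fmx<ddMonYYYY> and print the backup file name
backup_fmx() {
  local fmx="$1"
  local stamp
  stamp=$(date +%d%b%Y)
  cp "$fmx" "${fmx}${stamp}"
  echo "${fmx}${stamp}"
}

# Example: backup_fmx INVSDOIO.fmx  ->  e.g. INVSDOIO.fmx17Feb2022 (date-dependent)
```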

Special Scenario - Loss of Datafile (non-critical) without any backup and in Noarchivelog mode

We faced the below interesting issue in one of our test environments; I'll call it SPICE19 here. One of the datafiles was deleted with the intent to recreate it. Unfortunately, the database was up and running at the moment the datafile was deleted at the OS level.

Observations:
1. Database SPICE19 was in NOARCHIVELOG mode.
2. No backups were available for restoration.
3. Datafile '/u02/oracle/oradata/SPICE19/users06' was accidentally created with a wrong name (it should have been /u02/oracle/oradata/SPICE19/users06.dbf).
4. It was then removed (rm) at the OS level.
5. A new datafile was added with the name /u02/oracle/oradata/SPICE19/users06.dbf.
6. The database crashed searching for the datafile /u02/oracle/oradata/SPICE19/users06.
7. The database complained about the missing datafile /u02/oracle/oradata/SPICE19/users06, as its information was still in the controlfile.

Troubleshooting steps:
1. The missing datafile id was found as below:
SQL> select * from v$recover_file;
FILE#…

Exploring different use-cases for OCI Object Storage Gateway deployments

This post covers different approaches to deploying Object Storage Gateway. You can think of Object Storage Gateway as a bridge that connects your on-premises environment with Object Storage; it enables file-to-object transparency. Object Storage buckets are mounted as NFS mount points in your on-prem environment. Substantial information is available on Object Storage Gateway, and links are shared in this post. Let's jump into the different approaches to deploying it.

My observations while implementing the POCs below:
1. Object Storage Gateway can be deployed either on-prem or on OCI. It can be downloaded for free here.
2. SSD drives and XFS filesystems for the mount points are recommended for storing Storage Gateway metadata, cache, and logs.
3. OSG does not support Windows operating environments.
4. If installing OSG on-prem, make sure you have proper access control on the Storage Gateway server and secure it with MFA.
5. If installing OSG in the cloud, you can ha…
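For observation 2, the filesystem backing the planned cache/metadata path can be checked before installation. A sketch under assumptions: the path and helper name are illustrative, and df --output is a GNU coreutils option.

```shell
#!/usr/bin/env bash
# Print the filesystem type backing a path (XFS is recommended for OSG cache/metadata/logs)
fs_type() {
  df --output=fstype "$1" | tail -n 1 | tr -d ' '
}

# Example: check the root filesystem; substitute the intended OSG cache path
fs_type /
```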