Oracle DB_FILES parameter: a complete reference for the new Oracle DBA





Find here important information about the DB_FILES parameter: how to increase its value, what you should check before increasing it, and related behaviour.

If you are using 10gR2, it is no issue if there are more datafiles than the value of MAXDATAFILES; it will work, and the database automatically adds record slots in the control file (visible in the V$CONTROLFILE_RECORD_SECTION view), as shown by Laurent Schneider in the example at the link below:

http://laurentschneider.com/wordpress/2006/01/change-maxdatafiles-malogfiles-maxinstances-maxlogmembers-without-downtime.html
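
If you want to watch that expansion happen, here is a minimal check using the standard V$CONTROLFILE_RECORD_SECTION view (nothing in it is specific to any one database):

-- How many datafile record slots the control file currently holds.
-- RECORDS_TOTAL grows automatically when a file beyond the original
-- MAXDATAFILES is added, as long as DB_FILES allows it.
SELECT type, records_total, records_used
  FROM v$controlfile_record_section
 WHERE type = 'DATAFILE';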

As for what MAXDATAFILES and the DB_FILES parameter actually mean, search the relevant documentation and rely on that alone.

So, if the value of DB_FILES is too low, we cannot add datafiles beyond the DB_FILES limit, and because the parameter is static we have to shut down and restart the database to change it, as sketched below.
If the value of DB_FILES is too high, memory is unnecessarily consumed.
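
A minimal sketch of that change, assuming an spfile is in use (the value 500 is only an illustration):

-- DB_FILES is a static parameter, so it can only be changed in the spfile.
ALTER SYSTEM SET db_files = 500 SCOPE = SPFILE;

-- The new value takes effect only after a restart.
SHUTDOWN IMMEDIATE
STARTUP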

Consider the Performance Impact

The number of datafiles contained in a tablespace, and ultimately the database, can have an impact upon performance.

Oracle Database allows more datafiles in the database than the operating system defined limit. The database DBWn processes can open all online datafiles. Oracle Database is capable of treating open file descriptors as a cache, automatically closing files when the number of open file descriptors reaches the operating system-defined limit. This can have a negative performance impact. When possible, adjust the operating system limit on open file descriptors so that it is larger than the number of online datafiles in the database.
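
From inside the database you can check the Oracle side of that comparison with a quick sketch like the one below (the OS side still has to be checked with your platform's tools, such as the ulimit output shown later in this post):

-- Online datafiles that the DBWn processes may need to keep open.
SELECT count(*) AS online_datafiles
  FROM v$datafile
 WHERE status IN ('ONLINE', 'SYSTEM');

-- Current DB_FILES setting, for comparison.
SELECT value AS db_files
  FROM v$parameter
 WHERE name = 'db_files';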

  1. Doc ID: Note:331067.1
    Subject: Maxdatafiles v/s Db_files. Add more data files than value of Maxdatafiles ?
    Type: HOWTO
    Creation Date: 12-AUG-2005
    Last Revision Date: 29-AUG-2005

Applies to:
Oracle Server – Enterprise Edition – Version: 8.1.7.0 to 10.2.0.0
Information in this document applies to any platform.

Goal
This document discusses the behaviour of the control file setting MAXDATAFILES and the database parameter DB_FILES in Oracle versions 8i and above.

Prior to 8i, MAXDATAFILES was considered the hard limit and DB_FILES the soft limit,
and it was recommended that the DB_FILES parameter value always stay below the MAXDATAFILES setting.

Can we add more datafiles to the database than the value of MAXDATAFILES?
Do we need to recreate the control file to get a higher value for MAXDATAFILES?

Solution
In Oracle8i and higher, if the MAXDATAFILES limit is reached the controlfile will expand automatically.

In Oracle 8i and higher, when you issue CREATE DATABASE or CREATE CONTROLFILE statements, the MAXDATAFILES parameter specifies an initial size of the datafile portion of the control file. Later, if you add a file whose number exceeds MAXDATAFILES but is less than or equal to the value specified by the DB_FILES initialization parameter, the control file automatically expands to allow the datafile portion to accommodate more files.

We can have DB_FILES set to more than the value of MAXDATAFILES.

However, note that the DB_FILES parameter value should stay within the OS kernel limits.
Refer to Note 144638.1 – Relationship Between Common Init.ora Parameters and Unix Kernel Parameters

If you are adding datafiles to the database and are within the DB_FILES limit, you will get an error only if the control file is unable to allocate more space.

Recreating the control file is not required to increase the MAXDATAFILES parameter.
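
As an illustration of that behaviour, a minimal sketch (the tablespace name and file path here are hypothetical):

-- Adding a datafile whose file number exceeds the control file's original
-- MAXDATAFILES, but stays within DB_FILES, simply expands the control file;
-- no error is raised and no control file re-creation is needed.
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/ORCL/users02.dbf' SIZE 100M;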

It is not good to have a DB_FILES parameter value much higher than required. Increasing the value of DB_FILES increases the size of the PGA (Program Global Area), which is allocated for every user process connected to Oracle.

References
Note 144638.1 – Relationship Between Common Init.ora Parameters and Unix Kernel Parameters

You can change MAXDATAFILES as follows:

1. Shut down the database and take a backup.
2. Start up the database.
3. From SQL*Plus as SYSDBA, type: alter database backup controlfile to trace;
4. Type: shutdown immediate
5. Go to the operating system and change to the USER_DUMP_DEST directory.
6. Find the newest trace file.
7. Edit the trace file and change MAXDATAFILES to the new value. You will also need to delete all of the lines prior to the line that begins STARTUP NOMOUNT. See the sample text below (the example is for a database running in ARCHIVELOG mode):
 
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORACLE" RESETLOGS ARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXDATAFILES 32
    MAXINSTANCES 16
    MAXLOGHISTORY 1600
LOGFILE
    GROUP 1 'D:\ORAWIN95\DATABASE\LOG2ORCL.ORA' SIZE 200K,
    GROUP 2 'D:\ORAWIN95\DATABASE\LOG1ORCL.ORA' SIZE 200K
DATAFILE
    'D:\ORAWIN95\DATABASE\SYS1ORCL.ORA',
    'D:\ORAWIN95\DATABASE\USR1ORCL.ORA',
    'D:\ORAWIN95\DATABASE\RBS1ORCL.ORA',
    'D:\ORAWIN95\DATABASE\TMP1ORCL.ORA';
-- Database can now be opened normally.
ALTER DATABASE OPEN RESETLOGS;
 
8. From SQL*Plus as SYSDBA, run the edited trace file from step 7 (a sample session is sketched below).
9. Shut down the database and take a backup.
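
Putting steps 4 through 8 together, the session might look like this (a rough sketch; the script name ccf.sql is only a placeholder for wherever you saved the edited trace file):

sqlplus / as sysdba

SQL> SHUTDOWN IMMEDIATE
SQL> @ccf.sql                        -- runs STARTUP NOMOUNT + CREATE CONTROLFILE
SQL> ALTER DATABASE OPEN RESETLOGS;  -- if not already included in the script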

 


The OS file-opening limit should not be smaller than the number of files Oracle opens.

That is determined by the following

db_files*2 (doubled so an equal number of temp files can be opened) + 2 * maximum_no_of_log_files_simultaneously_opened + maximum_number_of_controlfiles + safety_margin_for_misc_files (trace files etc.; minimum 32)

If this is greater than the OS limit, the message is flagged and file descriptors are recycled. In your case the calculated value would be
4000*2 + 2*8 (assumed) + 8 (assumed) + 32 = 8056
Since 8056 > 4096, the message is shown. Try a value for db_files that is a little below half the OS limit.
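
If you want to plug your own numbers into that formula, here is a rough sketch from inside the database (it counts all online redo log members as simultaneously open, which slightly over-estimates, and uses the minimum 32-descriptor safety margin from the formula above):

-- Estimate Oracle's potential open-file count per the rule of thumb above.
SELECT TO_NUMBER(p.value) * 2   -- datafiles, doubled to cover temp files
     + lf.members * 2           -- online redo log members (rough upper bound)
     + cf.cnt                   -- control files
     + 32                       -- safety margin for trace files etc.
       AS estimated_descriptors
  FROM (SELECT value FROM v$parameter WHERE name = 'db_files') p,
       (SELECT count(*) AS members FROM v$logfile) lf,
       (SELECT count(*) AS cnt FROM v$controlfile) cf;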

Yesterday we increased db_files from 200 to 500 and received the following message in the alert log.

Oracle instance running on a system with low open file
descriptor limit. Tune your system to increase this
limit to avoid severe performance degradation.

Plugging our numbers into the formula above (assuming around 20 open log files and 3 control files): 500*2 + 2*20 + 3 + 32 = 1075, which is just over our limit of 1024, while 400*2 + 2*20 + 3 + 32 = 875 stays below it.

Our file descriptor limit is set to 1024. Reducing db_files to 400 did not give this message. The interesting thing is that we increased the file descriptor limit very recently, from 64 to 1024. Does anyone have more information on this?

How to kill RAC using the db_files parameter?

Submitted by abatishchev on Fri, 2007-04-06 07:54

Once upon a time Le Petit Chaperon Rouge (LPCR) decided to change db_files from 200 to 300 on a busy cluster of four nodes.

Soon after the change, applications started to block each other, time out, and crash (enq: TX - contention; cursor: pin S wait on X; enq: HW - contention). The applications that were actively inserting into LOB columns suffered the most.

Many lock holders were waiting for "DFS lock handle". Thus LPCR learned that DFS stands for "Distributed File System".

As soon as she reversed the change, everything went back to normal.

Moral:

The "DFS lock handle" wait points to problems with the file systems.
Well bred young DBAs should never change the db_files parameter recklessly.
Or they may provide dinner for a wolf.

I just wanted to make people aware that temp files start at the next value after db_files, even though the number in V$TEMPFILE doesn't match what is in the alert log.
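
A hedged illustration of that offset (standard views; the absolute file number the alert log reports for a tempfile is DB_FILES plus its V$TEMPFILE file number):

-- V$TEMPFILE.FILE# starts from 1; the alert log numbers tempfiles
-- starting at db_files + 1.
SELECT tf.file#                      AS tempfile_no,
       tf.file# + TO_NUMBER(p.value) AS alert_log_file_no,
       tf.name
  FROM v$tempfile tf, v$parameter p
 WHERE p.name = 'db_files';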

"rdbms ipc reply" waits indicate that either DBWn's write batch is too big, or you are checkpointing too intensively. Make sure that db_files is not much higher than the number of datafiles that you actually have. If you are using incremental checkpoints (db_block_max_dirty_target), don't be too ambitious.

My db_files parameter is set at 1024 but I only have 76 datafiles, so I will begin by dropping this value, which should decrease my write batch size.

"The value is now set with _db_writer_chunk_writes
rather than _db_block_write_batch. The default value appears to be a
platform
specific constant – that is, not derived from any other parameters."

So it looks like I have to explicitly set this value with
"_db_writer_chunk_writes".

As you said, in 8i the DBWR write batch size is not dependent on db_files; _db_block_write_batch was changed to _db_writer_chunk_writes, and its actual value can be seen with:

select kviival from x$kvii where kviitag = 'kcbscw';

Oracle instance running on a system with low open file descriptor limit. Tune your system to increase this limit to avoid severe performance degradation.
———————–
Environment: SunOS 5.8 Generic_108528-24 sun4u sparc SUNW,Ultra-Enterprise-10000, Oracle 9.2.0
You will get this message when one of two conditions is met: either your server is not configured properly for the large number of datafiles in your database, or you have defined MAXDATAFILES to be a number larger than your server is configured for. To see how many files your server is configured for, issue the following:

oracle_ora9i% ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
nofiles(descriptors) 256
vmemory(kbytes)      unlimited

Notice the "nofiles" line. This is the number of file descriptors your system is configured for. To see the MAXDATAFILES parameter, dump your control file to trace and look at the resulting trace file in USER_DUMP_DEST.

If you decide to decrease MAXDATAFILES, then you’ll have to recreate your control file with a lower number. You’ll want this number to be higher than your number of database datafiles though. Personally, I’d rather do the next option than recreate my control file.

If you decide to reconfigure your kernel, you will have to modify the values for rlim_fd_max (the hard limit) and/or rlim_fd_cur (the soft limit) in /etc/system and reboot your server. Make sure you work with your system administrator on this procedure.

As per Metalink, fs.file-max should be greater than (db_files * 2) + control files + log files + overhead. In my case there are 2000 datafiles, the total comes to about 4500 at most, and there is only one database on the cluster, so my OS limits are far, far above that.
