Case Study: Reducing Memory Usage with MTS

by Brian Keating
Database Specialists, Inc.

About Database Specialists, Inc.
Database Specialists, Inc. provides remote DBA services and onsite database support for your mission-critical Oracle systems. Since 1995, we have been providing Oracle database consulting in Solaris, HP-UX, Linux, AIX, and Windows environments. We are DBAs, speakers, educators, and authors. Our team is continually recognized by Oracle, at national conferences, and by leading trade publications. Learn more about our remote DBA, database tuning, and consulting services. Or, call us at 415-344-0500 or 888-648-0500.


Oracle’s “Multi-Threaded Server” feature, or MTS for short, allows a relatively large number of user connections to an Oracle instance to be serviced by a relatively small number of server processes. The alternative is to use “dedicated” connections, in which a separate server process is created for each user connection. The primary benefit of MTS is that it reduces the memory requirements for user connections. The main drawback of MTS, however, is that it has the potential to cause significant performance degradation if implemented in an environment that is not suited for MTS.

This article assumes that the reader has at least a basic knowledge of the MTS architecture—how dispatchers, shared servers, request queues and response queues all interact. The primary focus of this article is to discuss a real-world case of implementing MTS on a production database, and to describe the reduction in memory usage that resulted. Finally, this article discusses environments in which it may or may not be appropriate to deploy MTS.

Memory Requirements for Dedicated and Shared Connections

One of the production Oracle databases I support was originally set up with dedicated connections, and during peak usage periods a shortage of memory was frequently observed. The memory problems became more severe after the database was migrated from 7.3.4 to 8.1.6. Some key characteristics of the server and database are as follows:

Operating system version: HP-UX 11.0
Oracle version: 8.1.6
Physical memory (RAM): 1.5 Gb
Virtual memory (swap space): 2.5 Gb
Concurrent user connections during peak periods: about 700

Using dedicated connections, all of the physical memory and 80% to 100% of the virtual memory would be allocated during peak periods. When virtual memory usage reached 100%, a variety of problems would arise, such as new users being unable to log in, and occasionally even existing user connections being terminated by the operating system.

Needless to say, this situation was extremely disruptive to end users. And to make the situation more dire, shortly after migrating the database to 8.1.6 I was informed that in the near future many more users would begin accessing the database—up to 3000 total concurrent users. Since the system had difficulty handling the current load already, it certainly would not be able to support the increased user community.

In an initial effort to resolve the memory shortage, I reduced the size of the SGA as much as I could, and the system administrators reduced the operating system's memory requirements. However, this did not make a significant difference in overall memory utilization.

My next step was to determine the average amount of memory each dedicated connection was using. That would allow me to calculate how much additional memory would be needed to support the pending increase in user connections. Since the only applications running on the server were the operating system and two Oracle instances (the Oracle instance with the 700 users plus a second very small instance), I calculated the average memory requirements for each dedicated connection as follows:

  1. I added up the total amount of memory used by the operating system and by the two SGAs:

    Operating system memory usage: 118 Mb
    SGA of larger instance: 337 Mb
    SGA of smaller instance: 27 Mb
    Total: 482 Mb
  2. I subtracted this amount from the total amount of memory (physical and virtual) on the server:

    Total memory on the server: 4000 Mb
    Memory used by the OS and instances: 482 Mb
    Memory remaining: 3518 Mb
  3. I divided the memory remaining by the number of concurrent connections during peak periods, because the user connections must account for all the rest of the server’s memory allocation (and during peak periods, all of the memory got allocated):

    Memory remaining: 3518 Mb
    Concurrent connections during peak periods: ~700
    Average memory usage per connection: ~5 Mb
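
The three steps above amount to a quick back-of-the-envelope calculation. Here is a minimal sketch in Python, using the figures from this case study:

```python
# Estimate the average memory used per dedicated connection.
# All figures are in Mb and come from this case study.

total_memory = 1500 + 2500            # physical RAM plus swap space
fixed_usage = 118 + 337 + 27          # OS plus the two SGAs

remaining = total_memory - fixed_usage
per_connection = remaining / 700      # ~700 concurrent connections at peak

print(remaining)                      # 3518
print(round(per_connection, 1))       # 5.0
```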

Although there were two instances running on the server, the second instance was much smaller and typically had only about five concurrent users. The memory usage of this second instance was not really significant in this case study.

From these figures, each dedicated connection appeared to use about 5 Mb of memory. Most of the Oracle documents that I have seen state that dedicated connections will use about 3 to 4 Mb each. I suspect that my figure was higher because the sort_area_size initialization parameter was set relatively high (1 Mb), and because these instances were on a higher version of Oracle (the Oracle documents that I have seen describe 7.3 instances). The cursor space required by the application in question also has a direct impact on connection memory requirements.

Note that I initially tried to calculate the connections’ memory usage from the “session pga memory max” statistic in v$sesstat, but the numbers simply didn’t add up. According to that statistic the maximum PGA memory used was only about 600 Mb during peak periods, and that figure was far lower than what I actually observed at the operating system level.

In any case, the figures I computed here were based on 700 concurrent connections, but in the near future that number was expected to reach 3000 connections. This meant that 2300 more connections would have to be supported by the server, and at 5 Mb per connection, the server would need at least 11.5 Gb of additional memory.
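That projection works out as follows (a quick sketch, using the approximate 5 Mb-per-connection figure from this case study):

```python
# Additional memory needed if peak connections grow from 700 to 3000,
# at roughly 5 Mb per dedicated connection.

additional_connections = 3000 - 700   # 2300 new connections
mb_per_connection = 5
additional_mb = additional_connections * mb_per_connection

print(additional_mb)                  # 11500 Mb, about 11.5 Gb
```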

That amount of memory was considered far too costly to purchase, so I did not have much of a choice— I had to work with what I had and that meant running tests with MTS in order to try to reduce the memory requirements of user connections. (Of course we could have simply increased the swap space on the system by 11.5 Gb, but forcing the system to do that much swapping would have undoubtedly resulted in a thrashing situation. In that type of situation, the CPUs have to spend almost all of their time paging and swapping, rather than getting any useful work done.)

With MTS, user connections allocate almost all of their memory from the UGA, which is a part of the SGA. If the large_pool_size initialization parameter is set, then a dedicated area in the SGA will be created for MTS connections. Otherwise those connections will allocate memory in the shared pool. It is generally recommended to create a large pool, so I did. However, whether you use a large pool or not, if there is not enough memory in the SGA to handle your user connections, then users will receive “shared memory allocation” errors when they try to connect. Therefore, when using MTS, you need to calculate ahead of time how much memory your user connections will require.

This is a consequence of using memory in the SGA (which is static) for user connections rather than the PGA (which is dynamic). Note that in Oracle 9i the shared_pool_size parameter is dynamic, meaning you can manually enlarge the shared pool without having to restart the instance if you see that it is running low on space. Of course, you have to catch this before the shared pool actually does run out of space or else shared memory allocation errors will occur. Note that the large_pool_size parameter cannot be adjusted without restarting the instance.

In contrast to dedicated connections, it seems much easier to accurately measure the memory usage of MTS connections by querying the v$sesstat view. The “session uga memory max” statistic does appear to have accurate information about MTS connections’ memory usage. Note, however, that this statistic does not measure the maximum amount of UGA memory that has been allocated since the instance was started. Rather, it measures the maximum amount of memory allocated by the currently connected sessions.

So, if you want to determine the average amount of memory that each connection allocates, you will have to measure the maximum amount of UGA memory used at a given moment and divide that amount by the number of current user sessions. The maximum amount of UGA memory allocated at a given moment can be determined with the following query:

SELECT SUM (value) "MTS max memory allocation"
FROM   v$sesstat ss, v$statname st
WHERE = 'session uga memory max'
AND    ss.statistic# = st.statistic#;

The number of MTS connections currently in existence can be measured with the following query:

SELECT COUNT (*) "MTS connections"
FROM   v$session
WHERE  server != 'DEDICATED';

The server column in v$session will have a value of either SHARED or NONE if a user is connected through MTS, or DEDICATED if the user is connected through a dedicated server process. Note that the background processes (smon, pmon, etc.) evidently always use dedicated connections.

By running the above queries several times, I was able to determine that the amount of UGA memory used by MTS connections averaged between 160 Kb and 185 Kb per connection—less than 4% of the amount of memory used by dedicated connections. These figures correlate very closely to figures that I have found on MetaLink; the postings that I have seen on MetaLink state that MTS connections usually require between 150 Kb and 200 Kb each.

I then multiplied 185 Kb by 700 connections, and came up with a figure of about 130 Mb as an estimate of how much memory I would need to allocate for the large pool. However, in order to allow lots of padding for unanticipated circumstances, I decided to size the large pool about three times larger than I thought I would need, or 400 Mb.
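The sizing arithmetic looks like this (a minimal sketch; the threefold padding factor is simply the safety margin chosen in this case study):

```python
# Estimate the large pool size needed for MTS connections, then pad it.
# 1 Mb = 1024 Kb.

kb_per_connection = 185               # worst-case average observed
connections = 700
estimated_mb = kb_per_connection * connections / 1024

print(round(estimated_mb))            # 126, i.e. roughly 130 Mb
large_pool_mb = round(estimated_mb) * 3   # padding factor of ~3
print(large_pool_mb)                  # 378, rounded up to 400 Mb in practice
```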

This database has now been running with MTS for over six weeks, and the MTS connection memory requirements have remained very consistent. I have never seen the maximum UGA memory allocation exceed 140 Mb for 700 concurrent connections.

After allocating 400 Mb for the large pool for the larger instance, the total amount of memory used by the operating system and the two instances on the database server came to about 882 Mb. As mentioned above, MTS connections allocate almost all of their memory in the SGA, but they do allocate a small amount of stack space memory in the PGA. The stack space memory usage is negligible, though—only about 2 Kb per connection.

Since implementing MTS on the larger instance, the total amount of memory allocated on the database server during peak periods is about 900 Mb—in contrast to about 4000 Mb with dedicated connections. To put it another way, the total memory allocation on the system while using MTS is about 60% of the physical memory and 0% of the virtual memory. With MTS, the database server will now be in a much better position to handle the estimated 3000 concurrent users in the near future.
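As a sanity check on those percentages (a trivial sketch using the figures from this case study):

```python
# Post-MTS memory utilization relative to physical memory (figures in Mb).

physical_memory = 1500
peak_usage = 900                      # fits entirely in RAM; no swap needed
pct_of_physical = 100 * peak_usage / physical_memory

print(round(pct_of_physical))         # 60 (percent of physical memory)
```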

Environments in Which to Consider Deploying MTS

As mentioned in the introduction, the main drawback of MTS is that in certain environments it has the potential to cause serious performance degradation. In the few MTS-enabled databases that I have worked with, I have only seen one case where MTS caused any significant performance problems. We’ll discuss that situation shortly.

There are a number of postings on MetaLink and references in the Oracle documentation that describe environments in which MTS has caused problems. Basically, MTS is typically only appropriate in environments that meet the following two criteria:

  1. The application has a high “think time.” That is, users spend a relatively small amount of time running SQL statements, compared to the amount of time that they spend working with data that they have already retrieved. If that is not the case—if the application is designed such that sessions are constantly running SQL statements—then the shared server processes of MTS will have to constantly switch between processing the various users’ requests. This can cause performance degradation due to the overhead involved in switches and the amount of time that requests may wait in the queue before a shared server becomes available.

  2. The result sets that users retrieve are generally rather small. If a user retrieves a very large result set (i.e. tens of thousands of rows), then that user’s dispatcher can become overwhelmed with that particular user’s request—at the expense of all of the other users connected through the same dispatcher.

There is an additional recommendation that I would make in reference to using MTS, although I have never seen this recommendation on MetaLink or in any Oracle documentation: If there is enough physical memory on the server so that all of the concurrent processes (operating system, Oracle, user connections, applications, etc.) during peak periods can fit into physical memory, then don’t bother implementing MTS. There are two reasons for this:

  1. As long as you do not use more than 100% of the physical memory on a server, the operating system will not have to page or swap. If that’s the case, there will not be any pressing need to conserve memory. In other words, reducing a system’s memory usage from 80% of physical memory to 20% of physical memory will not really buy you anything—it will simply mean that more of your physical memory will be idle most of the time. So, unless you have at least some virtual memory usage, I would not recommend going through the additional complexity of MTS.

  2. More importantly, MTS appears to cause some additional CPU overhead as compared to running an equivalent number of dedicated connections. Therefore, if a system is CPU-bound, then MTS will probably not be a good option for that system.

I encountered one situation in which a database server’s CPUs were constantly pegged at 100% usage, and the CPU queue length (the number of processes waiting for CPU time) was typically 6 or 7 during peak periods. That database had been using MTS for several years, even though there was more than enough physical memory on the system to support dedicated connections. So, in an attempt to reduce the system’s CPU usage, I disabled MTS. The average CPU queue length during peak periods went down to about 4.

Moral of the story: If a system is CPU-bound and that system has enough RAM to fit all of its processes into physical memory, then MTS should probably not be used on that system.


Conclusion

The Oracle Multi-Threaded Server feature can dramatically reduce the memory requirements of user connections, at the expense of some additional complexity and at the risk of causing performance degradation. Basically, one might want to consider using MTS if a server is having significant memory problems (i.e., a very high degree of paging/swapping) and if the application is designed in such a way as to make sharing of user connections feasible. In general, OLTP-type applications can be suitable for MTS, while batch, data warehouse, and decision support applications are usually not appropriate for MTS.

About the Author

Brian Keating has been an Oracle DBA and Unix system administrator for over six years. He is currently a consultant with Database Specialists, Inc., a consulting group specializing in business solutions based on Oracle technology.




Copyright 2001 Database Specialists, Inc.