Benefits of Using the Server Result Cache

See Oracle Database Concepts for more information about the server result cache. The benefits of using the server result cache depend on the application.
OLAP applications can benefit significantly from its use. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.
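A sketch of such a candidate query follows; the sales table and its columns are hypothetical, and the RESULT_CACHE hint is the standard way to request that the result set be cached:

```sql
-- A minimal sketch of a warehouse-style candidate query: it scans many rows
-- but returns only a few aggregated ones. Table and column names are illustrative.
SELECT /*+ RESULT_CACHE */
       channel_id,
       SUM(amount_sold) AS total_sold
FROM   sales
GROUP  BY channel_id;
```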
For example, you can use advanced query rewrite with equivalences to create materialized views that materialize queries in the result cache instead of using tables. See Oracle Database Data Warehousing Guide for information about using the result cache and advanced query rewrite with equivalences.
When a query executes, the database searches the cache memory to determine whether the result exists in the result cache. If the result exists, then the database retrieves the result from memory instead of executing the query.
If the result is not cached, then the database executes the query, returns the result as output, and stores the result in the result cache. When users execute queries and functions repeatedly, the database retrieves rows from the cache, decreasing response time.
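One way to observe this behavior is through the standard dynamic performance views. A sketch (run a result-cached query at least twice first):

```sql
-- 'Create Count Success' counts results built in the cache;
-- 'Find Count' counts queries answered from the cache.
SELECT name, value
FROM   v$result_cache_statistics
WHERE  name IN ('Create Count Success', 'Find Count');

-- Each cached result and its dependencies appear here. The STATUS column
-- changes to Invalid when data in a dependent object is modified.
SELECT id, type, status, name
FROM   v$result_cache_objects
WHERE  type = 'Result';
```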
Cached results become invalid when data in dependent database objects is modified. The following sections contain examples of how results are retrieved from the server result cache:
How Results are Retrieved in a Query
How Results are Retrieved in a View

The first example shows a query of the hr schema. In this example, the results are retrieved directly from the cache, as indicated in step 1 of the execution plan. The value in the Name column is the cache ID of the result. The output of this query might look like the following.

In the second example, the summary view results are retrieved directly from the cache, as indicated in step 3 of the execution plan.
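As a rough illustration of the first kind of example (hr.employees, the column list, and the plan layout are assumptions; the actual query and cache ID differ):

```sql
EXPLAIN PLAN FOR
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   hr.employees
GROUP  BY department_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- The plan for such a query contains a step like the following, where the
-- Name column holds the cache ID of the result (shown here as a placeholder):
--
-- |  1 |  RESULT CACHE     | <cache_id> |
```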
The client result cache is distinct from the server result cache, which resides in the SGA. The client cache exists for each client process and is shared by all sessions inside the process. Oracle recommends client result caching for queries of read-only or read-mostly tables. When client result caching is enabled, the query result set can be cached on the client, on the server, or both; client caching can be enabled even if the server result cache is disabled.
Benefits of Using the Client Result Cache

OCI-based client drivers, such as ODP.NET, support client result caching. Performance benefits of using the client result cache include the following. When queries are executed repeatedly, the application retrieves results directly from the client cache memory, resulting in faster query response time. Because such queries require no round trip to the server, server resources are freed for other tasks, thereby making the server more scalable. The client result cache stores the results of the outermost query, which are the columns defined by the OCI application.
Subqueries and query blocks are not cached. Consider a client process with a database login session. This client process has one client result cache, shared among multiple application sessions running in the client process. If the first application session runs a query, then it retrieves rows from the database and caches them in the client result cache.
If other application sessions run the same query, then they also retrieve rows from the client result cache. The client result cache transparently keeps the result set consistent with session state or database changes that affect it. When a transaction changes the data or metadata of database objects used to build the cached result, the database sends an invalidation to the OCI client on its next round trip to the server.
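A sketch of how the client result cache is typically enabled from the server side; the values are illustrative, and both parameters are static, so an instance restart is required:

```sql
ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE = 32M  SCOPE = SPFILE;  -- per-process cache size
ALTER SYSTEM SET CLIENT_RESULT_CACHE_LAG  = 3000 SCOPE = SPFILE;  -- maximum lag in milliseconds

-- Tables can also be annotated so that qualifying queries use the result
-- cache without a per-query hint; MODE DEFAULT removes the annotation.
ALTER TABLE hr.employees RESULT_CACHE (MODE FORCE);
```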
See Oracle Call Interface Programmer's Guide for details about the client result cache.

This section describes how to configure the server and client result caches and contains the following topics:

Configuring the Server Result Cache
Configuring the Client Result Cache
Setting the Result Cache Mode
Requirements for the Result Cache

By default, on database startup, Oracle Database allocates memory to the server result cache in the shared pool. The amount of memory allocated depends on the memory size of the shared pool and the selected memory management system.
The size of the server result cache grows until it reaches the maximum size. Query results larger than the available space in the cache are not cached.
The database employs a Least Recently Used (LRU) algorithm to age out cached results, but does not otherwise automatically release memory from the server result cache.
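You can, however, inspect the cache and release its memory manually with the DBMS_RESULT_CACHE package. A sketch:

```sql
SET SERVEROUTPUT ON

-- Print a summary of result cache memory usage.
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;

-- Check whether the result cache is enabled.
SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL;

-- Remove all cached results and release the memory back to the shared pool.
EXEC DBMS_RESULT_CACHE.FLUSH;
```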
This section describes how to configure the server result cache. The following database initialization parameters control the server result cache.

RESULT_CACHE_MAX_SIZE: Specifies the memory allocated to the server result cache. To disable the server result cache, set this parameter to 0.

RESULT_CACHE_MAX_RESULT: Specifies the maximum amount of server result cache memory, as a percentage, that can be used for a single result. Valid values are between 1 and 100. You can set this parameter at the system or session level.
RESULT_CACHE_REMOTE_EXPIRATION: Specifies the expiration time, in minutes, for a result in the server result cache that depends on remote database objects. The default value is 0, which specifies that results using remote objects are not cached.
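A sketch of setting these parameters; the values shown are illustrative, not recommendations:

```sql
ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE = 64M SCOPE = BOTH;
ALTER SYSTEM SET RESULT_CACHE_MAX_RESULT = 10 SCOPE = BOTH;          -- percent per result
ALTER SYSTEM SET RESULT_CACHE_REMOTE_EXPIRATION = 30 SCOPE = BOTH;   -- minutes

-- RESULT_CACHE_MAX_RESULT can also be changed for the current session only.
ALTER SESSION SET RESULT_CACHE_MAX_RESULT = 5;
```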
Note that if a non-zero value is set for RESULT_CACHE_REMOTE_EXPIRATION, DML on the remote database does not invalidate the server result cache. In an Oracle Real Application Clusters (Oracle RAC) environment, the result cache is specific to each database instance and can be sized differently on each instance.

Let's write an app that has a cached function which returns a mutable object, and then follow up by mutating that object, as in the sketch below.
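A sketch of such an app, assuming the legacy @st.cache API this article discusses (the function and variable names are illustrative):

```python
import streamlit as st

@st.cache  # legacy caching decorator discussed in this article
def expensive_computation(a, b):
    # Pretend this is slow; it returns a mutable dict.
    return {"output": a * b}

a = 2
b = 21
res = expensive_computation(a, b)
st.write("Result:", res)

# Mutate the returned object *outside* the cached function.
res["output"] = "result was manually mutated"
st.write("Mutated result:", res)
```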
No surprises here on the first run. But notice what happens when you rerun the app: Streamlit displays a warning. What's going on is that Streamlit caches the output res by reference. When you mutated res["output"] outside the cached function, you ended up inadvertently modifying the cache. Since this behavior is usually not what you'd expect, Streamlit tries to be helpful and shows the warning, along with some ideas about how to fix your code.
In this specific case, the fix is simply not to mutate res["output"] outside the cached function; there was no good reason to do that anyway. A couple of possible fixes are sketched below; check out the section entitled Fixing caching issues for more information on these approaches and more.
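Two ways the fix might look, continuing the hypothetical example above (a sketch, not the article's exact code):

```python
import copy

import streamlit as st

@st.cache
def expensive_computation(a, b):
    return {"output": a * b}

# Option 1: never mutate the cached object; work on a copy instead.
res = copy.deepcopy(expensive_computation(2, 21))
res["output"] = "only the copy is mutated"

# Option 2: if mutation is intentional, opt out of the mutation check.
@st.cache(allow_output_mutation=True)
def expensive_computation_mutable(a, b):
    return {"output": a * b}
```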
In Caching, you learned about the Streamlit cache, which is accessed with the @st.cache decorator. In this article you'll see how Streamlit's caching functionality is implemented, so that you can use it to improve the performance of your Streamlit apps.

For both the key and the output hash, Streamlit uses a specialized hash function that knows how to traverse code, hash special objects, and whose behavior can be customized by the user. If an error is encountered during hashing, an exception is raised; if the error occurs while hashing either the key or the output, an UnhashableTypeError is thrown.
If you run into any issues, see Fixing caching issues. As described above, Streamlit's caching functionality relies on hashing to calculate the key for cached objects, and to detect unexpected mutations in the cached result.
Suppose you define a type called FileReference which points to a file in the filesystem, as in the sketch below.
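A sketch of what such a class and a cached function using it might look like (the names are illustrative):

```python
import streamlit as st

class FileReference:
    """Points to a file on disk by name rather than holding its contents."""
    def __init__(self, filename):
        self.filename = filename

@st.cache
def process_file(file_reference):
    # Expensive work that reads the referenced file would go here.
    with open(file_reference.filename) as f:
        return f.read()
```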
By default, Streamlit hashes custom classes like FileReference by recursively navigating their structure. In this case, its hash is the hash of the filename property, so as long as the file name doesn't change, the hash remains constant. However, what if you wanted the hasher to check for changes to the file's modification time, not just its name?
This is possible with st.cache's hash_funcs parameter, as sketched after the following list. While it's possible to write custom hash functions, let's take a look at some of the tools that Python provides out of the box. Here's a list of some hash functions and when it makes sense to use them:

Python's id function
Python's hash function
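A sketch of a custom hash function passed through hash_funcs; hashing by (filename, modification time) invalidates the cache whenever the underlying file changes:

```python
import os

import streamlit as st

class FileReference:  # repeated from the earlier sketch
    def __init__(self, filename):
        self.filename = filename

def hash_file_reference(file_reference):
    # Two FileReferences hash the same only if both the name and mtime match.
    filename = file_reference.filename
    return (filename, os.path.getmtime(filename))

@st.cache(hash_funcs={FileReference: hash_file_reference})
def process_file(file_reference):
    with open(file_reference.filename) as f:
        return f.read()

# Alternatively, hash_funcs={FileReference: id} treats each instance as
# distinct, and hash_funcs={FileReference: hash} uses the object's __hash__.
```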
Suppose we want to open a database connection that can be reused across multiple runs of a Streamlit app. For this you can make use of the fact that cached objects are stored by reference to automatically initialize and reuse the connection, as in the sketch below.
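A sketch using sqlite3 as a stand-in database; the function name and database path are illustrative, and allow_output_mutation=True stops Streamlit from hashing or copying the connection object:

```python
import sqlite3

import streamlit as st

@st.cache(allow_output_mutation=True)
def get_connection():
    # Created once per Streamlit server process, then reused on every rerun.
    return sqlite3.connect("app.db", check_same_thread=False)

conn = get_connection()
rows = conn.execute("SELECT 1").fetchall()
st.write(rows)
```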
With just 3 lines of code, the database connection is created once and stored in the cache. In other words, it becomes a singleton.

The PL/SQL function result cache works across sessions: cached results can be reused by any session calling the same function with the same parameters. This article describes the usage and administration of the function result cache. The following example shows its usage: a function uses a SLEEP procedure to slow itself down and is then called in two identical loops. The first loop takes approximately 10 seconds, 1 second per function call, while the second is almost instantaneous.
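A hedged sketch of the kind of test described; the function name, the specific SLEEP call, and the loop bounds are assumptions based on the description above:

```sql
CREATE OR REPLACE FUNCTION slow_function (p_in IN NUMBER)
  RETURN NUMBER
  RESULT_CACHE
AS
BEGIN
  DBMS_SESSION.SLEEP(1);   -- DBMS_LOCK.SLEEP in older releases
  RETURN p_in;
END;
/

SET TIMING ON

DECLARE
  l_value NUMBER;
BEGIN
  -- First loop: 10 distinct calls, each sleeping 1 second (about 10 seconds).
  FOR i IN 1 .. 10 LOOP
    l_value := slow_function(i);
  END LOOP;
END;
/

DECLARE
  l_value NUMBER;
BEGIN
  -- Second loop: the same 10 calls are answered from the function result cache.
  FOR i IN 1 .. 10 LOOP
    l_value := slow_function(i);
  END LOOP;
END;
/
```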