80% of the performance depends on the application and on the data model.
20% depends on the hardware, the Oracle version, database parameters, and SQL tuning (execution plans, indexes, etc.).
Application
Here's an example of a poorly designed application: a page shows a list with many rows, and for every row a separate SQL statement is sent to the database. SQL tuning will not help here. You have to change the application.
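This "one statement per row" pattern is often called the N+1 problem. A minimal sketch (table and bind-variable names are illustrative, not from the application described above):

```sql
-- Anti-pattern: the page first fetches the list ...
SELECT order_id FROM orders WHERE customer_id = :cust;
-- ... and then, for EVERY order_id, fires a separate statement:
SELECT * FROM order_items WHERE order_id = :id;   -- executed N times

-- Better: one round trip, let the database do the work
SELECT o.order_id, i.*
FROM   orders o
JOIN   order_items i ON i.order_id = o.order_id
WHERE  o.customer_id = :cust;
```

No index or execution plan can compensate for the N extra round trips; only the application code can eliminate them.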
Data Model
The same goes for the data model. Normalization is good, but normally you have to stop at a certain point; otherwise, you'll get performance problems. For many years, I worked on an application with a well-designed data model, but it had one fundamental flaw. To the designer, it seemed like a good idea: every table had columns like ID, SHORTNAME (German: Kürzel), TENANT (German: Mandant/"Mutter"), CREATED_DATE, CREATED_BY, etc. So he created a "central" table called SVENTITAET and put these columns into it, adding a TABLE column so that everyone knew which table held the rest of the data.
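A hypothetical sketch of this "central entity" design (column types and the INVOICE table are invented for illustration):

```sql
-- The shared columns live in ONE central table
CREATE TABLE sventitaet (
  id            NUMBER PRIMARY KEY,
  shortname     VARCHAR2(30),   -- German: Kuerzel
  tenant        NUMBER,         -- German: Mandant
  created_date  DATE,
  created_by    VARCHAR2(30),
  table_name    VARCHAR2(30)    -- which table holds the rest of the row
);

-- Each "real" table then only holds its specific columns
CREATE TABLE invoice (
  id       NUMBER PRIMARY KEY REFERENCES sventitaet (id),
  amount   NUMBER,
  due_date DATE
);
```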
If you search for a row with a certain shortname, you only have to search in SVENTITAET; you don't need to think about where the data is.
But: for nearly all queries, you need data from this table, so you have to join every table with it. This makes your SQL complex and slow. Also, every DML (data manipulation language) operation has to touch SVENTITAET as well. SVENTITAET was definitely a hot spot in the database.
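Continuing the hypothetical INVOICE sketch from above, here is what the extra join and the doubled DML look like:

```sql
-- Even a trivial lookup needs a join with the central table
SELECT e.shortname, i.amount, i.due_date
FROM   sventitaet e
JOIN   invoice    i ON i.id = e.id
WHERE  e.tenant    = :tenant
AND    e.shortname = :shortname;

-- And every insert is TWO inserts (updates and deletes likewise hit both tables)
INSERT INTO sventitaet (id, shortname, tenant, created_date, created_by, table_name)
VALUES (:id, :shortname, :tenant, SYSDATE, USER, 'INVOICE');

INSERT INTO invoice (id, amount, due_date)
VALUES (:id, :amount, :due_date);
```

Because every session writing any table also writes SVENTITAET, that one table's blocks and indexes become a point of contention for the whole application.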
So don't over-normalize your data model. Put these columns into every table, not into one central table. And if you want to search for a shortname without knowing which table it is in, generate a view, or, if that is too slow, a materialized view.
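The view-based alternative can be sketched like this (again with invented table names; the real application would list all of its tables in the UNION ALL):

```sql
-- With the shared columns in every table, a UNION ALL view restores the
-- "search everywhere by shortname" convenience
CREATE OR REPLACE VIEW all_shortnames AS
SELECT 'INVOICE'  AS table_name, id, shortname, tenant FROM invoice
UNION ALL
SELECT 'CUSTOMER' AS table_name, id, shortname, tenant FROM customer;

-- If querying the view is too slow, precompute the result
CREATE MATERIALIZED VIEW all_shortnames_mv AS
SELECT 'INVOICE'  AS table_name, id, shortname, tenant FROM invoice
UNION ALL
SELECT 'CUSTOMER' AS table_name, id, shortname, tenant FROM customer;
```

This keeps reads convenient while each insert, update, and delete touches only its own table.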