SQL Database Performance Tuning for Developers

Database tuning can be an incredibly difficult task, particularly when working with large-scale data where even the most minor change can have a dramatic (positive or negative) impact on performance.

In mid-sized and large companies, most SQL database tuning will be handled by a Database Administrator (DBA). But believe me, there are plenty of developers out there who have to perform DBA-like tasks. Further, in many of the companies I’ve seen that do have DBAs, they often struggle to work well with developers—the positions simply require different modes of problem solving, which can lead to disagreement among coworkers.

On top of that, corporate structure can also play a role. Say the DBA team is placed on the 10th floor with all of their databases, while the devs are on the 15th floor, or even in a different building under a completely separate reporting structure. It's certainly hard to work together smoothly under these conditions.

In this article, I'd like to accomplish two things:

  1. Provide developers with some developer-side database tuning techniques.
  2. Explain how developers and DBAs can work together effectively.

Database Optimization (in the Codebase)

Indexes

If you’re a complete newcomer to databases and even asking yourself “What is SQL tuning?”, you should know that indexing is an effective way to tune your SQL database that is often neglected during development. In basic terms, an index is a data structure that improves the speed of data retrieval operations on a database table by providing rapid random lookups and efficient access of ordered records. This means that once you’ve created an index, you can select or sort your rows faster than before.

Indexes are also used to define a primary key or a unique index, which guarantees that no other rows contain the same values in the indexed columns. Of course, indexing is a vast and interesting topic to which I can't do justice with this brief description (here's a more detailed write-up).

If you’re new to indexes, I recommend using this diagram when structuring your queries: 

This diagram illustrates a few SQL query performance tuning tips every developer should know.

Basically, the goal is to index the major searching and ordering columns.
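For example, if your application frequently looks up customers by city and sorts them by name, an index on those two columns lets SQL Server seek straight to the matching rows and return them already ordered. The statements below are only a sketch; the Customer table echoes the examples later in this article, while the Email column and the index names are made up for illustration:

CREATE NONCLUSTERED INDEX IX_Customer_City_Name
ON Customer (City, Name) -- City is the searching column, Name the ordering column

CREATE UNIQUE INDEX UX_Customer_Email
ON Customer (Email) -- a unique index also guarantees no two rows share the same Email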

Note that if your tables are constantly hammered by INSERT, UPDATE, and DELETE, you should be careful when indexing—you could end up decreasing performance as all indexes need to be modified after these operations.

Further, DBAs often drop their indexes before performing million+ row batch inserts to speed up the insertion process. After the batch is inserted, they then recreate the indexes. Remember, however, that dropping indexes will affect every query running against that table, so this approach is only recommended when working with a single, large insertion.
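In T-SQL, that workflow looks roughly like the sketch below; the Sales and StagingSales tables and the index name are placeholders, not objects from this article:

DROP INDEX IX_Sales_OrderDate ON Sales -- drop the index so the batch insert skips index maintenance

INSERT INTO Sales (OrderDate, Amount, CustomerID) -- million+ row batch load
SELECT OrderDate, Amount, CustomerID
FROM StagingSales

CREATE NONCLUSTERED INDEX IX_Sales_OrderDate ON Sales (OrderDate) -- recreate the index once the load is done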

SQL Server Performance Tuning: Execution Plans

By the way: the Execution Plan tool in SQL Server can be useful for creating indexes.

Its main function is to graphically display the data retrieval methods chosen by the SQL Server query optimizer. If you’ve never seen them before, here’s a detailed walkthrough.

To retrieve the execution plan (in SQL Server Management Studio), just click “Include Actual Execution Plan” (CTRL + M) before running your query.

Afterwards, a third tab named "Execution Plan" will appear. If the optimizer detected a missing index, you'll see it flagged there. To create it, just right-click in the execution plan and choose "Missing Index Details…". It's as simple as that!

This screenshot demonstrates one of the performance tuning techniques for your SQL database.
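The script SSMS generates from "Missing Index Details…" is essentially a CREATE NONCLUSTERED INDEX statement; you only have to give the index a sensible name and run it. The exact key and INCLUDE columns depend on your query, but it looks something along these lines (the table, columns, and index name below are placeholders):

CREATE NONCLUSTERED INDEX IX_Customer_CompanyID
ON dbo.Customer (CompanyID) -- key column the optimizer wants to seek on
INCLUDE (Name, City) -- covering columns so the query can avoid extra lookups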

Tune SQL query performance by avoiding coding loops

Imagine a scenario in which 1000 queries hammer your database in sequence. Something like:

for (int i = 0; i < 1000; i++)
{
    // One INSERT statement, and one round trip to the database, per iteration
    SqlCommand cmd = new SqlCommand("INSERT INTO TBL (A,B,C) VALUES...");
    cmd.ExecuteNonQuery();
}

You should avoid such loops in your code. For example, we could transform the above snippet into a single INSERT or UPDATE statement with multiple rows and values:

INSERT INTO TableName (A,B,C) VALUES (1,2,3),(4,5,6),(7,8,9) -- SQL Server 2008 and later (row constructors)

INSERT INTO TableName (A,B,C) SELECT 1,2,3 UNION ALL SELECT 4,5,6 UNION ALL SELECT 7,8,9 -- SQL Server 2005

UPDATE TableName SET A = CASE B
        WHEN 1 THEN 'NEW VALUE'
        WHEN 2 THEN 'NEW VALUE 2'
        WHEN 3 THEN 'NEW VALUE 3'
    END
WHERE B in (1,2,3)

Also make sure that your WHERE clause skips rows whose stored value already matches the new value. Such a trivial optimization can dramatically increase SQL query performance by updating only hundreds of rows instead of thousands. For example:

UPDATE TableName
SET A = @VALUE
WHERE
      B = 'YOUR CONDITION'
      AND A <> @VALUE -- skip rows that already hold the new value

Avoid correlated subqueries

A correlated subquery is one which uses values from the parent query. This kind of query tends to run row-by-row, once for each row returned by the outer query, and thus decreases SQL query performance. New developers are often caught structuring their queries in this way—because it’s usually the easy route.

Here’s an example of a correlated subquery:

SELECT c.Name, 
       c.City,
       (SELECT CompanyName FROM Company WHERE ID = c.CompanyID) AS CompanyName 
FROM Customer c

In particular, the problem is that the inner query (SELECT CompanyName…) is run for each row returned by the outer query (SELECT c.Name…). But why go over the Company table again and again for every row processed by the outer query?

A more efficient performance tuning technique would be to refactor the correlated subquery as a join:

SELECT c.Name, 
       c.City, 
       co.CompanyName 
FROM Customer c 
	LEFT JOIN Company co
		ON co.ID = c.CompanyID

In this case, we go over the Company table just once, at the start, and JOIN it with the Customer table. From then on, we can select the values we need (co.CompanyName) more efficiently.

Select sparingly

One of my favorite SQL optimization tips is to avoid SELECT *! Instead, you should individually include the specific columns that you need. Again, this sounds simple, but I see this error all over the place. Consider a table with hundreds of columns and millions of rows—if your application only really needs a few columns, there’s no sense in querying for all the data. It’s a massive waste of resources. (For more issues, see here.)

For example:

SELECT * FROM Employees

vs.

SELECT FirstName, City, Country FROM Employees

If you really do need every column, explicitly list every column anyway. This isn't so much a rule as a way to prevent future system errors and extra database tuning work later on. For example, if you're using an INSERT... SELECT... and the source table has changed via the addition of a new column, you might run into issues even if that column isn't needed by the destination table, e.g.:

INSERT INTO Employees SELECT * FROM OldEmployees

Msg 213, Level 16, State 1, Line 1
Insert Error: Column name or number of supplied values does not match table definition.

To avoid this kind of error, you should declare each column individually:

INSERT INTO Employees (FirstName, City, Country)
SELECT Name, CityName, CountryName
FROM OldEmployees

Note, however, that there are some situations where the use of SELECT * could be appropriate. For example, with temp tables—which leads us to our next topic.
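For instance, copying a filtered working set into a temp table with SELECT * is usually harmless, since the temp table only lives for the current session. A minimal sketch (the HireDate filter is hypothetical):

SELECT *
INTO #RecentEmployees -- temp table, dropped automatically when the session ends
FROM Employees
WHERE HireDate >= '2020-01-01'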

Source: http://www.toptal.com/sql/sql-database-tuning-for-developers
