Redshift design or configuration issue? - My Redshift data warehouse seems much slower than my MySQL database

I have a Redshift data warehouse that pulls in data from multiple sources. One is my MySQL database and the others are cloud-based databases.

When querying in Redshift, the query response is significantly slower than the same query against the MySQL table(s).

Here is an example: SELECT * FROM leads WHERE id = 10162064

In MySQL this takes 0.4 seconds. In Redshift it takes 4.4 seconds.

The table has 11 million rows. "id" is indexed in MySQL; in Redshift it is not, since Redshift is a columnar system.

I know that Redshift is a columnar data warehouse (which is relatively new to me) and MySQL is a relational database that can utilize indexes. I'm not sure if Redshift is the right tool for us for reporting, or if we need something else. We have about 200 tables in it from 5 different systems and it is currently at 90 GB.

We have a reporting tool sitting on top that does native queries to pull data. They are pretty slow but are also pulling a ton of data from multiple tables. I would expect some slowness with these, but with a simple statement like above, I would expect it to be quicker.

I've tried some different DIST and SORT key configurations but see no real improvement.
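For context, a typical configuration attempt looked something like the following (the table name comes from the example above; choosing "id" as both distribution and sort key is just one illustrative option, not a recommendation from the answers):

```sql
-- Hypothetical redefinition of the leads table with "id" as both the
-- distribution key (so a single-id lookup hits one slice) and the sort
-- key (so zone maps can skip blocks that can't contain that id).
CREATE TABLE leads_keyed
  DISTKEY (id)
  SORTKEY (id)
AS SELECT * FROM leads;
```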

I've run vacuum and analyze with no improvement.

We have 4 nodes, dc2.large. Currently only using 14% storage. CPU utilization is frequently near 100%. Database connections average about 10 at any given time.

The data warehouse just holds exact copies of the tables from our integration with the other sources. We are trying to do near real-time reporting with this.

Just looking for advice on how to improve performance of our Redshift cluster via configuration changes, some sort of view or dim table architecture, or any other tips to help me get the most out of Redshift.



Solution 1:[1]

I've worked with clients on this type of issue many times and I'm happy to help but this may take some back and forth to narrow in on what is happening.

First I'm assuming that "leads" is a normal table, not a view and not an external table. Please correct if this assumption isn't right.

Next I'm assuming that this table isn't very wide and that "select *" isn't contributing greatly to the speed concern. Yes?

Next question is why this size of cluster for a table of only 11M rows? My guess is that there are other, much larger data sets in the database and that this table isn't what is driving the cluster size.

The first step of narrowing this down is to go onto the AWS console for Redshift and find the query in question. Look at the actual execution statistics and see where the query is spending its time. I'd guess it will be in loading (scanning) the table but you never know.
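If you prefer SQL over the console, the same per-step execution statistics are available from the system views. A sketch (substitute the actual query id from STL_QUERY):

```sql
-- Per-step breakdown of where a query spent its time.
-- "label" identifies the operation (scan, dist, hash, etc.);
-- is_diskbased = 't' flags steps that spilled to disk.
SELECT query, seg, step, label, rows, bytes, is_diskbased
FROM svl_query_summary
WHERE query = 123456   -- substitute the actual query id
ORDER BY seg, step;
```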

You also should look at STL_WLM_QUERY for the query in question and see how much wait time there was with the running of this query. Queueing can take time and if you have interactive queries that need faster response times then some WLM configuration may be needed.
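A quick way to separate queue time from execution time for that query (times are reported in microseconds):

```sql
-- Queue wait vs. actual execution time for one query.
SELECT query, service_class, total_queue_time, total_exec_time
FROM stl_wlm_query
WHERE query = 123456;  -- substitute the actual query id
```

If total_queue_time dominates, the fix is WLM configuration rather than table design.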

It could also be compile time but given the simplicity of the query this seems unlikely.

My suspicion is that the table is spread too thin around the cluster and lots of mostly empty blocks are being read, but this is just based on assumptions. Is "id" the distkey or sortkey for this table? Other factors likely in play are cluster load - is the cluster busy when this query runs? WLM is one place where things can interfere, but disk IO bandwidth is a shared resource, and if other queries are abusing the disks, every query's access to disk will be slow. (The same is true of network bandwidth and leader node workload, but these don't seem to be central to your issue at the moment.)
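The "spread too thin" theory can be checked directly. One way, assuming the table name from the question:

```sql
-- Distribution style, sort key, row skew, and size (in 1 MB blocks)
-- for the table in question. High skew_rows or a large size relative
-- to tbl_rows suggests many sparsely filled blocks.
SELECT "table", diststyle, sortkey1, skew_rows, tbl_rows, size
FROM svv_table_info
WHERE "table" = 'leads';
```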

As I mentioned resolving this will likely take some back and forth so leave comments if you have additional information.

Solution 2:[2]

(I am speaking from a knowledge of MySQL, not Redshift.)

SELECT * FROM leads WHERE id = 10162064

If id is indexed, especially if it is a UNIQUE (or PRIMARY) key, 0.4 sec sounds like a long network delay. I would expect 0.004s as a worst case (with SSDs and `PRIMARY KEY(id)`).

(If leads is a VIEW, then let's see the tables. 0.4s may be reasonable!)

That query works well for a RDBMS, but not for a columnar database. Face it.

I can understand using a columnar database to handle random queries on various columns. See also MariaDB's implementation of "Columnstore" -- that would give you both RDBMS and Columnar in a single package. Still, they are separate enough that you can't really intermix the two technologies.

If you are getting 100% CPU in MySQL, show us the query, its EXPLAIN, and SHOW CREATE TABLE. Often, a better index and/or query formulation can solve that.
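The diagnostics being asked for would be gathered in MySQL like this:

```sql
-- Show the query plan (which index, if any, is used):
EXPLAIN SELECT * FROM leads WHERE id = 10162064;

-- Show the table definition, including indexes:
SHOW CREATE TABLE leads;
```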

For "real time reporting" in a Data Warehouse, building and maintaining Summary Tables is often the answer.

Tell us more about the "exact copy" of the DW data. In some situations, the Summary tables can supplant one copy of the Fact table data.
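A minimal sketch of the summary-table idea, assuming hypothetical columns (created_at, source) since the real schema isn't shown:

```sql
-- Hypothetical summary table: daily lead counts per source.
-- Reports query this small table instead of scanning 11M fact rows;
-- it would be refreshed incrementally as new leads arrive.
CREATE TABLE leads_daily_summary AS
SELECT DATE(created_at) AS lead_date,
       source,
       COUNT(*)         AS lead_count
FROM leads
GROUP BY DATE(created_at), source;
```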

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Bill Weiner
Solution 2