Monthly Archives: July 2014

Dynamic SQL

I must confess I had never heard of dynamic SQL. But suddenly everyone around me started talking about dynamic SQL. What is dynamic SQL?
Dynamic SQL is a SQL statement that is created at runtime. Such is the definition.
I realised that I had already created quite some dynamic SQL without realising that such a construction is called dynamic SQL. As an example, I have created a data warehouse that is fed via SQL statements. A number of these statements are only created at runtime. One may think of a construction like:

set $targettable = XX

insert into $targettable
  select * from source

In the example above, the SQL that is fired will be "insert into XX select * from source". However, if $targettable is set to YY, the generated SQL will be "insert into YY select * from source".

Hence dynamic SQL allows you to write one skeleton SQL statement that can be used in different circumstances. This allows you to write smaller scripts that are easier to maintain.
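
To make this concrete: below is a minimal sketch of how such a skeleton statement could be built and executed in a Teradata stored procedure with DBC.SysExecSQL, the dynamic SQL facility of stored procedures. The procedure name LOAD_TARGET, the database SAN_D_FAAPOC_01 and the SOURCE table are only illustrative.

CREATE PROCEDURE SAN_D_FAAPOC_01.LOAD_TARGET (IN targettable VARCHAR(128))
BEGIN
-- build the skeleton statement at runtime; only the target table name varies
CALL DBC.SysExecSQL('INSERT INTO ' || targettable || ' SELECT * FROM SAN_D_FAAPOC_01.SOURCE;');
END;

Calling the procedure with XX or YY as argument then fires the corresponding INSERT statement.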

Another situation where dynamic SQL can be used is the handling of user input.

Suppose one has input fields where a user may enter some data. The user input is captured in a variable that is subsequently inserted into the database. Something like:

set $input = "data from input form"

insert into targettable
  inputfield ($input)
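
As a sketch, the same idea in Teradata stored-procedure form; the procedure STORE_INPUT, the TARGETTABLE table and its INPUTFIELD column are hypothetical names.

CREATE PROCEDURE SAN_D_FAAPOC_01.STORE_INPUT (IN inputvalue VARCHAR(100))
BEGIN
-- the captured user input is wrapped in quotes and placed inside the generated statement
CALL DBC.SysExecSQL('INSERT INTO SAN_D_FAAPOC_01.TARGETTABLE (INPUTFIELD) VALUES (''' || inputvalue || ''');');
END;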

Around me, some pushback on dynamic SQL could be heard. The main problem is that dynamic SQL might be difficult to maintain, as one may not know exactly what SQL is generated. I am inclined to see that as sloppy programming. A good programmer can always capture the generated SQL and show it to the user. This can be handy when the script must be debugged.
Another issue was that the statements that generate the SQL can be quite complicated. This can certainly be the case. I have seen scripts that were needlessly complicated just to generate a SQL statement. However, good programming techniques should prevent this.

MOLAP and ROLAP

I currently work in an organisation that has a debate on whether to use MOLAP or ROLAP. But first of all: what is being discussed here?
ROLAP and MOLAP are two different techniques to store data that are meant for OLAP analysis. In ROLAP the data are stored in tables in a relational database and each OLAP data search is translated into a SQL query that is fired at the relational database. In MOLAP, the data are stored in a proprietary structure that is fully optimised to return data for OLAP investigations.
Both systems have their advantages. The big advantage of ROLAP is that one takes full advantage of an existing DBMS. The data are stored in a DBMS; any SQL query may then use the computing power of that DBMS without the need to install yet another programme. It is also true that in a modern environment with an existing data warehouse, one may directly use the tables that reside in that data warehouse. Hence ROLAP doesn't need to copy the data into a proprietary structure to retrieve OLAP outcomes.
The big advantage of MOLAP is that a separate proprietary data structure is created that is geared towards quickly returning outcomes for an OLAP question. To do so, the data structure needs to anticipate every possible OLAP question. This implies that with an increasing number of different outcomes, the proprietary data structure increases in size.
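
As a small illustration of the ROLAP idea, an OLAP question such as "total sales per department" could be translated into an ordinary aggregate query against the relational tables. The SALES_FACT table and its columns below are purely illustrative.

SELECT
DEPT_NO,
SUM(SALES_AMOUNT) AS TOTAL_SALES
FROM SALES_FACT
GROUP BY DEPT_NO;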
This shows that one may have a case to advocate MOLAP if:
– the number of different outcomes from OLAP is not too big (hence the proprietary data structure is not too big in size)

In a ROLAP structure, each OLAP question is translated into a SQL query. If the same OLAP question is raised multiple times, multiple SQL queries will be fired. As these SQL queries are identical, the DBMS may undertake the same actions multiple times (assuming results are not cached). In a MOLAP structure, the possible OLAP outcomes are pregenerated in a proprietary data structure. Hence, complex calculations are not only doable, they also return quickly once they are stored in the MOLAP structure, as they are pregenerated. An update can be done after a fresh data load in the data warehouse. Once the data structure is ready, each OLAP question can be answered from that data structure. If the same OLAP question is raised again, the same data are retrieved from that proprietary data structure.
This shows that one has a case to advocate MOLAP if:
– the queries in the database are complicated (a sketch of such a query is given after this list); examples of such complex queries are

  • queries with a complex CASE statement,
  • a CASE expression in the GROUP BY,
  • queries that generate running totals,
  • queries that process large chunks of data,
  • queries with complex joins,
  • etc.

– the same OLAP question is repeated multiple times
– the data are refreshed at large time intervals
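
As an example of such a complicated query, below is a sketch of a running-total query written with a window function. The SALARY column is hypothetical; it is not part of the EMPLOYEE example table used later in this post.

SELECT
DEPT_NO,
EMP_NUM,
SUM(SALARY) OVER (PARTITION BY DEPT_NO ORDER BY EMP_NUM ROWS UNBOUNDED PRECEDING) AS RUNNING_TOTAL
FROM SAN_D_FAAPOC_01.EMPLOYEE;

In a ROLAP environment such a query is executed every time the OLAP question is asked; in a MOLAP structure its outcomes would be pregenerated.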

One uses an OLAP structure to undertake an analysis that summarises data and, if required, allocates data to lower levels of detail. I realise that an OLAP structure is also used as a framework to retrieve a series of reports. Within these reports, no summarisations or allocations are done. Hence, one may say that only a part of the OLAP functionality is used. That also means that the proprietary data structure that is created in MOLAP is only partly used if no summarisations are undertaken. In that case, MOLAP might not be the best solution.
This shows that one has a case to advocate MOLAP if:
– one undertakes many OLAP movements, like summarisation, consolidation and detailing.

The case for ROLAP can be made if we have:

  • ad hoc reports or infrequently used reports,
  • near real-time reports,
  • high-cardinality dimension tables,
  • a request for non-summary reports,
  • reports that only require a simple SQL query that returns results quickly.

 

Duplicate records in Teradata tables

Teradata offers the user the choice whether or not a check is made on duplicate records. Let's first look at some code that allows duplicate records to be inserted.
The code below has two elements that enable duplicate records:

  1. it contains a primary index that is not unique.
  2. the table is a multiset table.
CREATE MULTISET  TABLE SAN_D_FAAPOC_01.EMPLOYEE
(
EMP_NUM INTEGER NOT NULL,
EMP_NAME CHAR(30) NOT NULL,
DEPT_NO INTEGER NOT NULL
)
 PRIMARY  INDEX(EMP_NUM);
 
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123456,'VINAY',101);
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123457,'SACHIN',102);
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123457,'SACHIN',102);

select * from SAN_D_FAAPOC_01.EMPLOYEE;

Two identical records can be inserted.
The table is a so-called multiset table. This type of table allows duplicate records. Note that this is not the Teradata default. The default is a table that doesn't allow duplicate records; such a table is created with "CREATE SET TABLE …".

CREATE SET  TABLE SAN_D_FAAPOC_01.EMPLOYEE
(
EMP_NUM INTEGER NOT NULL,
EMP_NAME CHAR(30) NOT NULL,
DEPT_NO INTEGER NOT NULL
)
 PRIMARY  INDEX(EMP_NUM);
 
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123456,'VINAY',101);
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123457,'SACHIN',102);
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123457,'SACHIN',102);

select * from SAN_D_FAAPOC_01.EMPLOYEE;

This set of inserts generates an error as an attempt is made to insert two identical records. However, it takes time to do such a check. Hence, in a data warehouse environment, such a check with its accompanying overhead might be too expensive. This explains why a multiset table is often preferred over the Teradata default.
Finally, in most cases the primary index is defined as unique. As the attribute that is used in the primary index is unique, the whole record is always unique as well. In that case no additional check on whether the whole record is unique is performed. Hence the definition below leads to unique records because the primary index is unique. Again, this uniqueness is enforced without an explicit test on the full records.

CREATE MULTISET  TABLE SAN_D_FAAPOC_01.EMPLOYEE
(
EMP_NUM INTEGER NOT NULL,
EMP_NAME CHAR(30) NOT NULL,
DEPT_NO INTEGER NOT NULL
--,CONSTRAINT FOREIGN_EMP_DEPT FOREIGN KEY ( DEPT_NO)  REFERENCES  WITH NO CHECK OPTION DEPARTMENT(DEPT_NO)   
)
UNIQUE  PRIMARY  INDEX(EMP_NUM);

Hence, we have two mechanisms that may enforce the uniqueness of records:

  • using the default SET option that prevents inserting duplicate rows,
  • using a unique primary index that prevents duplicate values for the attributes that form the primary index.

It is easier to check the uniqueness of a primary index, as a primary index is only defined on part of the record. Hence the combination multiset / non-unique primary index is the easiest: in that case, nothing needs to be checked for duplicates at all. On the other hand, set / non-unique primary index is the most expensive combination, as the whole record must be checked.
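
If one chooses a multiset table with a non-unique primary index, duplicates can still be detected afterwards. A sketch of such a check against the EMPLOYEE example table: group on all columns and keep the groups that occur more than once.

SELECT EMP_NUM, EMP_NAME, DEPT_NO, COUNT(*) AS NUMBER_OF_COPIES
FROM SAN_D_FAAPOC_01.EMPLOYEE
GROUP BY EMP_NUM, EMP_NAME, DEPT_NO
HAVING COUNT(*) > 1;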

The Teradata answer to materialised views

Teradata has a feature that is designed to increase the performance of queries. This feature is called the "join index". Such a join index is a structure that stores the outcomes of a query. These outcomes are stored permanently and wait for the moment they are called.
The syntax of such a join index is straightforward:

CREATE JOIN INDEX SAN_D_FAAPOC_01.EMP_DEPT
AS
SELECT
DEPT.DEPT_NO,
DEPT.DEPT_NAME,
EMP.EMP_NUM,
EMP.EMP_NAME
FROM
SAN_D_FAAPOC_01.EMPLOYEE EMP
INNER JOIN   
SAN_D_FAAPOC_01.DEPARTMENT DEPT
ON
DEPT.DEPT_NO = EMP.DEPT_NO
PRIMARY INDEX(EMP_NUM); 

The syntax contains the definition of a query. Its outcomes are permanently stored in the database as an object, here called "EMP_DEPT".

If a record gets added to one of the base tables that are used in the join index, the join index gets updated to reflect the new situation. This is the downside of the join index: each update of a base table is automatically followed by an update of the join index. This involves extra processing overhead.
Hence the creation of a join index must be weighed against the costs of the continuous update of the join index as a result of changes in the base tables.

It can be decided to create a permanent table that contains the query outcomes instead. Such a table can be updated at fixed time intervals (instead of the automatic update as with the join index).
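
A minimal sketch of that alternative, reusing the query from the join index above; the table name EMP_DEPT_COPY is only illustrative.

CREATE TABLE SAN_D_FAAPOC_01.EMP_DEPT_COPY AS
(
SELECT
DEPT.DEPT_NO,
DEPT.DEPT_NAME,
EMP.EMP_NUM,
EMP.EMP_NAME
FROM
SAN_D_FAAPOC_01.EMPLOYEE EMP
INNER JOIN
SAN_D_FAAPOC_01.DEPARTMENT DEPT
ON
DEPT.DEPT_NO = EMP.DEPT_NO
) WITH DATA
PRIMARY INDEX(EMP_NUM);

Refreshing then comes down to emptying and reloading this table at a chosen interval.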

Two final remarks.
One. One cannot address a join index directly. A statement like "select * from join_index" will only return an error. One only uses such a join index indirectly: it is used if a query is written that looks similar to the join index. In that case, the optimiser decides whether the join index reduces the time to return the outcomes.
Two. One can see whether a join index is used by reading the explain plan. This explain plan is shown if the query is preceded by the keyword "explain". In that case a well-written text shows how the optimiser works. One may then see whether the join index is used or not.
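
As a small illustration: the query below could be prefixed with EXPLAIN to check whether the optimiser picks up the EMP_DEPT join index defined earlier.

EXPLAIN
SELECT
DEPT.DEPT_NO,
EMP.EMP_NAME
FROM
SAN_D_FAAPOC_01.EMPLOYEE EMP
INNER JOIN
SAN_D_FAAPOC_01.DEPARTMENT DEPT
ON
DEPT.DEPT_NO = EMP.DEPT_NO;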

Soft RI in Teradata

Teradata has the concept of “Soft RI”. In this concept, a foreign key is created but its restriction is not enforced.
What happens in that situation?

Let's look at the normal situation of referential integrity. Suppose we have two tables, where one table is referred to by a second table. If a foreign key is created, we have a limitation on which records we may insert into the referring table under normal referential integrity. We cannot insert records with a foreign key value for which no related record exists in the referenced table. That is the normal referential integrity constraint.
Example: we have an "employee" table that has a foreign key "dept_no" that refers to a record in a "department" table. Suppose the department table has records for departments 101, 102 and 103. If a foreign key is created, one cannot add records to the employee table that refer to department 104, as this department doesn't exist in the department table.

To enforce this referential integrity, each insert must be followed by a check as to whether the inserted record complies with the referenced table. This check costs time. Hence an insert incurs additional processing time.

In a data warehouse environment, this overhead may be prohibitive. Moreover it might not be necessary as we retrieve the records from a source that has already enforced the referential integrity.

In that situation, we may apply "Soft RI": a foreign key relationship is created but its referential integrity is not enforced during loading.
We then avoid the costs of the referential integrity check, which means more records can be loaded in a given time frame. The code below demonstrates this: the employee that refers to the non-existing department 104 is accepted without complaint.

DROP TABLE SAN_D_FAAPOC_01.EMPLOYEE;
DROP TABLE SAN_D_FAAPOC_01.DEPARTMENT;


CREATE SET TABLE SAN_D_FAAPOC_01.DEPARTMENT ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      DEPT_NO INTEGER NOT NULL,
      DEPT_NAME VARCHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC,
      DEPT_LOC VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC)
UNIQUE PRIMARY INDEX ( DEPT_NO );



INSERT INTO SAN_D_FAAPOC_01.DEPARTMENT VALUES (101,'SALES','MUMBAI');
INSERT INTO SAN_D_FAAPOC_01.DEPARTMENT VALUES (102,'ACCOUNTS','MUMBAI');
INSERT INTO SAN_D_FAAPOC_01.DEPARTMENT VALUES (103,'HUMAN RESOURCES','MUMBAI');


CREATE  TABLE SAN_D_FAAPOC_01.EMPLOYEE
(
EMP_NUM INTEGER NOT NULL,
EMP_NAME CHAR(30) NOT NULL,
DEPT_NO INTEGER NOT NULL
,CONSTRAINT FOREIGN_EMP_DEPT FOREIGN KEY ( DEPT_NO)  REFERENCES  WITH NO CHECK OPTION DEPARTMENT(DEPT_NO)   
)
UNIQUE PRIMARY INDEX(EMP_NUM);


INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123456,'VINAY',101);
INSERT INTO SAN_D_FAAPOC_01.EMPLOYEE VALUES (123457,'SACHIN',104);

SEL
DEPT.DEPT_NO,
EMP.EMP_NUM,
EMP.EMP_NAME
FROM
SAN_D_FAAPOC_01.EMPLOYEE EMP
INNER JOIN   -- with soft RI the inner join only returns matching rows; the employee that refers to the non-existing department 104 drops out here
SAN_D_FAAPOC_01.DEPARTMENT DEPT
ON
DEPT.DEPT_NO = EMP.DEPT_NO;