Data validation across environments using HASH

Today I will discuss data validation across environments using the HASH function. There are times when we need to validate the data of the same table in different environments, which is a monotonous and time-consuming task when done manually. It is better to automate the process. Here, I am considering three environments: Hive, IBM DB2, and the Snowflake database.

HIVE: md5(concatenated_columns)
SNOWFLAKE: MD5_HEX(concatenated_columns)
IBM DB2: HEX(HASH_MD5(concatenated_columns))

Here, you need to take care of the below things.
1. The hexadecimal output may be in uppercase or lowercase, so normalize it with the UPPER or LOWER function across environments.
2. You need to take care of nulls…
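As a minimal sketch of the idea on the Hive side, the query below computes one MD5 per row; the table and column names (EMP_TBL, emp_id, emp_name) are illustrative, COALESCE handles the null issue, LOWER handles the case issue, and the '|' separator keeps adjacent values from colliding. The same concatenated expression would feed MD5_HEX in Snowflake and HEX(HASH_MD5(...)) in DB2.

-- Hive: one normalized MD5 per row (illustrative table/column names)
SELECT LOWER(md5(CONCAT(
         COALESCE(CAST(emp_id AS STRING), ''), '|',
         COALESCE(emp_name, '')
       ))) AS row_hash
FROM EMP_TBL;

Comparing the sorted list of row hashes from each environment then tells you whether the tables match.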

Count the number of nulls using left join

Today, I will discuss how to calculate the number of nulls that get populated with a left join, the percentage of matched data, or the count of not-null values using a left join.

CASE 1: Calculate the number of nulls that get populated from the right-hand table using a left join. Assume I have two employee tables, EMP_TBL1 and EMP_TBL2; see the data for the same. Below is the code used to get the number of nulls that come from the right-side table using a left join.

select count(*)-count(y_empid) from ( select x.emp_id, y.emp_id…
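The excerpt cuts off mid-query, so here is a minimal sketch of the full pattern, assuming emp_id is the join key and y_empid is the alias used in the post. COUNT(*) counts every row the join produces, while COUNT(y_empid) skips nulls, so the difference is exactly the number of unmatched rows from EMP_TBL2.

SELECT COUNT(*) - COUNT(y_empid) AS null_cnt
FROM (
  SELECT x.emp_id, y.emp_id AS y_empid
  FROM EMP_TBL1 x
  LEFT JOIN EMP_TBL2 y ON x.emp_id = y.emp_id
) t;

The matched percentage mentioned above follows the same idea: 100 * COUNT(y_empid) / COUNT(*).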

Internal and External tables in Hive

Today, I will present the types of tables that can be created in Hive in a different way. I created three tables in Hive.

create table empdtls (emp_id int); { This is called an Internal or Managed table }
create external table empdtls_ext (emp_id int); { This is an External table }
create external table empdtls_ext_v2 (emp_id int) location '/user/demo/hivetesting/'; { This is also called an External table }

Now, we will check the TABLE_TYPE of all the above tables using the command DESCRIBE FORMATTED table_name. Internal or Managed Table: /user/hive/warehouse/ is the location where all…
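For the three tables above, the check looks like this; DESCRIBE FORMATTED prints a Table Type line (MANAGED_TABLE or EXTERNAL_TABLE) along with the storage Location.

DESCRIBE FORMATTED empdtls;        -- Table Type: MANAGED_TABLE
DESCRIBE FORMATTED empdtls_ext;    -- Table Type: EXTERNAL_TABLE
DESCRIBE FORMATTED empdtls_ext_v2; -- Table Type: EXTERNAL_TABLE, Location: /user/demo/hivetesting/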

Check duplicate record in Hive

Today, I will discuss how to automate a check for entire-row duplicate records in Hive. As I have mentioned in all Automation blogs, I will share the pseudo code.

STEP 1: In Hive, use "desc table_name"; this command will give you the column names along with their datatype and data length. Store the output of this command in a file, say HIVE_TABLE_DDL.txt.

STEP 2: Read the file HIVE_TABLE_DDL.txt using the "cat" command:

cat HIVE_TABLE_DDL.txt | awk '{print $1}' ORS=',' | sed 's/,$//'

* awk '{print $1}'…
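Putting the steps together, a minimal sketch of the whole flow might look like the following; my_table is a placeholder, and the final GROUP BY/HAVING query (any group appearing more than once is a full-row duplicate) is the natural completion of the pseudo code, not the post's verbatim script.

# STEP 2: build the comma-separated column list from the DDL dump
COLS=$(cat HIVE_TABLE_DDL.txt | awk '{print $1}' ORS=',' | sed 's/,$//')

# Group by every column; count > 1 means the entire row is duplicated
hive -e "SELECT ${COLS}, COUNT(*) AS cnt FROM my_table GROUP BY ${COLS} HAVING COUNT(*) > 1;"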

Create table using file having variable length for each field in Hive

Today, I will discuss how to create a Hive table using a file in which the field length is different for every field. To put it differently, it is neither a fixed-length file nor a delimited file. Below is the example of the file content.

Definition of the file:
FIELD1, length 2, values are 10, 10
FIELD2, length 5, values are 5, 125
FIELD3, length 6, values are 3, 12
FIELD4, length 7, values are ABC, EF
FIELD5, length 4, values are 12, 21
FIELD6, length 10, values are 15, 10

In order to create a table for such files, use the below syntax for…
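The excerpt stops before the actual syntax. One common way to handle files like this, reading the per-field lengths above as the width of each column, is Hive's RegexSerDe with one capture group per field; the table name, path, and regex below are an assumption sketched from that reading, not necessarily the post's own solution.

CREATE EXTERNAL TABLE var_len_tbl (
  field1 STRING, field2 STRING, field3 STRING,
  field4 STRING, field5 STRING, field6 STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  -- one capture group per field, widths 2,5,6,7,4,10 as defined above
  "input.regex" = "(.{2})(.{5})(.{6})(.{7})(.{4})(.{10})"
)
LOCATION '/user/demo/varlen/';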

Equivalent of rollup in Hive

Today, I will discuss the equivalent of rollup in Hive. There are scenarios where we need to replace the rollup keyword in queries, as it might not work as expected. To put it differently, we might need to transform an Oracle/SQL Server query to Hive, where rollup doesn't give the same results we got in Oracle. In an employee table EMP, I have the columns EMP_ID, DEPT_NAME, SAL_MONTH, and SALARY.

Query-1 with rollup:
select EMP_ID, DEPT_NAME, SAL_MONTH, sum(SALARY) from EMP group by EMP_ID, DEPT_NAME, rollup(SAL_MONTH);

Equivalent Query-1 without rollup using the group by clause:
select EMP_ID, DEPT_NAME, null, sum(SALARY)…
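The equivalent query is truncated above. Since a partial rollup over SAL_MONTH is just the detailed grouping plus one subtotal level, a sketch of the complete rewrite (assuming SAL_MONTH is a string column) is a UNION ALL of two group-bys:

SELECT EMP_ID, DEPT_NAME, SAL_MONTH, SUM(SALARY)
FROM EMP
GROUP BY EMP_ID, DEPT_NAME, SAL_MONTH
UNION ALL
-- the subtotal rows rollup(SAL_MONTH) would add, with SAL_MONTH nulled out
SELECT EMP_ID, DEPT_NAME, CAST(NULL AS STRING), SUM(SALARY)
FROM EMP
GROUP BY EMP_ID, DEPT_NAME;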

Date functions in Hive

Today, I will share a list of queries related to "date" logic with which developers generally face issues during Hive query execution.

* Query to get the sysdate:
select from_unixtime(unix_timestamp(current_timestamp()))

* Query to get yesterday's date. Here 86400 is the number of seconds in a day (24 hours * 60 minutes in an hour * 60 seconds in a minute):
select to_date(from_unixtime(unix_timestamp(current_timestamp(),'yyyy-MM-dd')-86400*1));

* Subtract x number of days from a particular date:
select to_date(from_unixtime(unix_timestamp(current_timestamp(),'yyyy-MM-dd')-86400*x));

* Convert the sysdate to YYYY-MM-DD format:
select to_date(from_unixtime(unix_timestamp(current_timestamp(),'yyyy-MM-dd')));

* Convert the sysdate to YYYYMMDD format:
select regexp_replace(to_date(from_unixtime(unix_timestamp(current_timestamp(),'yyyy-MM-dd'))),'-','');

* Convert…
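On Hive 1.2 and later, the built-in date functions give the same results without the seconds arithmetic; this is an alternative sketch, not the post's own code.

select current_date;                          -- sysdate as yyyy-MM-dd
select date_sub(current_date, 1);             -- yesterday's date
select date_sub(current_date, 7);             -- x days back, shown here with x = 7
select date_format(current_date, 'yyyyMMdd'); -- sysdate in YYYYMMDD format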

Run Hive queries online

Today, I will discuss how to run Hive queries online. Below are the steps for the same.

* Go to the link https://demo.gethue.com/hue/accounts/login?next=/
* Enter the username and password as "demo".
* Click on the "Sign in" button. The screen below will appear. As you can see, many users have created their databases.
* In order to create your own database, click on the "+" icon.
* Fill in the database name and description.
* Click on the submit button. The output screen will appear in the top-right section.

So, we have the database ready, and its name is "hive_testing"…

Avro Output in Pentaho

Today, I will discuss the Avro Output component in Pentaho. In my previous blog, I shared my experience with the Avro Input component, where data deserialization happens. In this component, data serialization happens. So, if you have data in text format, you can convert it to Avro format as well. As soon as you do this conversion, a schema file also gets generated along with the Avro file. All of this can be achieved through the Avro Output component in Pentaho. I have designed a very simple transformation wherein we have a csv…