Today, I will discuss the Beeline command line, which we use to run SQL queries from Linux. But what if the same thing needs to be done through shell scripting?
First of all, we need to call the SQL query through the Beeline command line inside the shell script using the command below.
beeline -u "jdbc:hive2://localhost:10000/default;principal=hive/localhost" -n "username" -p "password" \
--hivevar var1=$col1_hive --hivevar var2=$schema_name --hivevar var3=$table_name --hivevar var4=$col1_value -f sql_script.sql > text.log
Here $col1_hive is the column name of a table, $table_name is the table name, $schema_name is the schema where that table is present, and $col1_value is the value of that column. So, in the Beeline command, you can pass a variable using the syntax --hivevar var[i]=value_from_shell_script_variable. These Hive variables can then be used inside the .sql file.
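For reference, here is a minimal sketch of how such a wrapper script might look end to end. The values assigned to schema_name, table_name, col1_hive and col1_value are placeholder assumptions, not part of the original command, so substitute your own.

#!/bin/bash
# Minimal sketch of the wrapper script. The variable values below are
# placeholders (assumptions) -- replace them with your own schema, table,
# column and lookup value.
schema_name="my_schema"
table_name="my_table"
col1_hive="col1"
col1_value="some_value"

# Run the SQL file through Beeline, passing the shell variables as Hive
# variables and capturing the output in text.log for later checks.
beeline -u "jdbc:hive2://localhost:10000/default;principal=hive/localhost" \
        -n "username" -p "password" \
        --hivevar var1="$col1_hive" --hivevar var2="$schema_name" \
        --hivevar var3="$table_name" --hivevar var4="$col1_value" \
        -f sql_script.sql > text.log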
Below is an example of sql_script.sql:
select col1 from ${var2}.${var3} where ${var1} = '${var4}' limit 1;
It's very important to understand the purpose of the text.log file. My intention in redirecting the output to this file is to use its content later in the shell script.
For example:
cat text.log | grep -iw $col1_value | wc -l
Here, I am searching for $col1_value in that file and counting the matching lines using "wc -l". If this returns 1, it means data is present for that value in the table.
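Putting that check into the shell script, the count can be captured in a variable and acted upon. This is just a sketch; it assumes the text.log file and the $col1_value variable from the command above, and uses grep -c as a shorthand for grep piped into wc -l.

# Count the lines in text.log that contain the value (case-insensitive, whole word).
row_count=$(grep -iwc "$col1_value" text.log)
if [ "$row_count" -ge 1 ]; then
    echo "Data is present for $col1_value in ${schema_name}.${table_name}"
else
    echo "No data found for $col1_value"
fi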
Another example of a .sql statement is:
SHOW CREATE TABLE ${var2}.${var3};
This will check whether the table is present in the particular schema or not.
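One way to act on this from the shell, assuming the SHOW CREATE TABLE statement was run through the same Beeline command and redirected to text.log, is to look for the CREATE statement in the log. This is a sketch, since the exact error text for a missing table can vary:

# If the table exists, the log will contain its CREATE TABLE / CREATE EXTERNAL TABLE statement.
if grep -iq "CREATE" text.log; then
    echo "Table ${schema_name}.${table_name} exists"
else
    echo "Table ${schema_name}.${table_name} was not found"
fi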
describe formatted ${var2}.${var3};
This will tell whether the table is present at the right HDFS location or not. When you run the describe formatted command, it also gives the avro.schema.url property, which tells whether the table is mapped to the correct schema path or not.
This check can be done using the grep command. Below is an example:
cat text.log | grep avro.schema.url
This command will tell you the path this object is mapped to.
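To validate the mapping automatically, the path can be extracted from the log and compared with the expected location. The sketch below assumes the schema URL starts with hdfs:// and uses a placeholder expected_path; adjust both to your environment.

# Pull the avro.schema.url value out of the describe formatted output
# (assumes the URL begins with hdfs://) and compare it with the expected path.
expected_path="hdfs://namenode/path/to/schema.avsc"   # placeholder value
actual_path=$(grep "avro.schema.url" text.log | grep -o 'hdfs://[^ |]*' | head -1)
if [ "$actual_path" = "$expected_path" ]; then
    echo "avro.schema.url is mapped to the correct path: $actual_path"
else
    echo "avro.schema.url points to $actual_path instead of $expected_path"
fi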