
Automate checking the existence of S3 files through shell scripting

Today, I will discuss how to automate checking the existence of files in an S3 bucket through a shell script. Here, I will share pseudo code for the same in the form of steps.
* Create a config file which will have the details below, pipe-delimited:

S3 bucket name|Project folder name|Relative path of sub-folder
Techie-1|ABC|prj1/UC1/

#### Here Techie-1 is the bucket name, ABC is the folder immediately under the bucket, and prj1/UC1/ is the project use-case folder ####
* This config file will have the list of use-case folders for which you need to test file existence in the S3 bucket.
* Read this config file line by line and use the commands below:

bucketName=$(echo "$line" | awk -F "|" '{print $1}')        ## This will give you the bucket name
prefix_Path=$(echo "$line" | awk -F "|" '{print $2"/"$3}')  ## This will give you the path as ABC/prj1/UC1/
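Putting the two commands together, the read loop could look like the sketch below. The file name s3_paths.cfg is an assumption for illustration, and the sample entry is written inline so the snippet is self-contained:

```shell
#!/bin/bash
# Sample config entry; in a real project this file would already exist.
# "s3_paths.cfg" is an assumed name used only for this sketch.
printf 'Techie-1|ABC|prj1/UC1/\n' > s3_paths.cfg

# Read the pipe-delimited config file line by line and split the fields.
while IFS= read -r line; do
    bucketName=$(echo "$line" | awk -F "|" '{print $1}')
    prefix_Path=$(echo "$line" | awk -F "|" '{print $2"/"$3}')
    echo "Bucket: $bucketName  Prefix: $prefix_Path"
done < s3_paths.cfg
```

Because the loop uses input redirection rather than a pipe, the variables remain set after the loop and can be used by the rest of the script.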

* Then, run the command below to get the required details of that folder.

aws s3api list-objects --bucket "$bucketName" --prefix "$prefix_Path"

* This will give you output in JSON format like below:

{
    "Contents": [
        {
            "LastModified": value,
            "ETag": value,
            "StorageClass": value,
            "Key": value,
            "Owner": {
                "DisplayName": value,
                "ID": value
            },
            "Size": value
        }
    ]
}
Here, Key holds the exact path up to the file name.
LastModified is self-explanatory: it is the timestamp of when the file was last modified.
Size tells you the file size in bytes.

The JSON output will contain one such object under "Contents" for every file, separated by commas.

Now, you need to check whether the file is present or not. For that, you have to make use of "Key". So, the final command becomes:

aws s3api list-objects --bucket "$bucketName" --prefix "$prefix_Path" | grep -w Key | awk -F '"' '{print $4}' | awk -F "/" '{print $NF}'

## This will take you to the leaf node of the Key value, i.e. the bare file name. From here on, the code differs depending on the project requirements.
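As one possible next step, the whole flow can be sketched as an existence check. This is only a local sketch: the aws call is stubbed with canned sample output so the logic runs without AWS credentials, and the bucket, prefix, and report.csv file name are illustrative assumptions:

```shell
#!/bin/bash
# Stub standing in for the real call:
#   aws s3api list-objects --bucket "$1" --prefix "$2"
list_keys() {
    cat <<'EOF'
{
    "Contents": [
        { "Key": "ABC/prj1/UC1/report.csv" }
    ]
}
EOF
}

# File whose existence we want to verify (example name).
target_file="report.csv"

# Extract the leaf file names from the Key values, then look for an
# exact match with grep -x.
found=$(list_keys "Techie-1" "ABC/prj1/UC1/" \
        | grep -w Key | awk -F '"' '{print $4}' | awk -F "/" '{print $NF}' \
        | grep -x "$target_file")

if [ -n "$found" ]; then
    echo "File exists: $target_file"
else
    echo "File not found: $target_file"
fi
```

In the real script you would replace the body of list_keys with the actual aws s3api call and loop over the config file entries.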
