Cloudera Meetup with focus on DBImport

DBImport is an accelerator used to ingest data into a data lake.

Ingesting data into Hive on a Hadoop cluster is both easy and hard. The initial load, usually done with sqoop, is fairly easy: connect to the source system and start fetching. With the right version of Hive, the table is even auto-created for you along with the data. That's great and simple. But once the first couple of tables are delivered to the users, they usually want more, and they want it loaded every day or every hour.
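To make that concrete, a first one-off load with sqoop can be as short as the sketch below. The JDBC URL, credentials, and table names are hypothetical placeholders, not taken from any real project:

# One-off initial load of a single source table into Hive.
# Connection string, user, and table names are made-up examples.
sqoop import \
  --connect jdbc:mysql://source-db.example.com/sales \
  --username etl_user \
  --password-file /user/etl/.dbpass \
  --table CUSTOMERS \
  --hive-import \
  --create-hive-table \
  --hive-table sales.customers \
  --num-mappers 4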

OK, so we put sqoop into a bash script and run it from cron. Problem solved! Well, after a while you will be drowning in those sqoop scripts. Suddenly they stop working: some “dude” on the source system changed their table definitions and renamed a couple of columns, but didn’t tell anyone because “it’s handled inside their application”. So that morning, when the Hadoop team arrives at work, they are met by a bunch of upset users who don’t have fresh data.
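In practice that setup tends to look like the sketch below: one hand-written wrapper script per table, each with its own crontab line. Paths, script names, and the schedule are made up for illustration:

# Hypothetical crontab: one sqoop wrapper script per table, run every
# morning at 05:00. A few hundred tables later, this is what you
# maintain by hand, and no script notices when a source table changes.
0 5 * * * /opt/ingest/import_customers.sh >> /var/log/ingest/customers.log 2>&1
0 5 * * * /opt/ingest/import_orders.sh    >> /var/log/ingest/orders.log    2>&1
0 5 * * * /opt/ingest/import_invoices.sh  >> /var/log/ingest/invoices.log  2>&1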

This is where DBImport comes to the rescue. With DBImport running those imports, the scenario above would be nothing more than an entry in a log file saying that the source table changed, while everything keeps working.

Berry Österlund will tell us all about how he has streamlined data import with this tool in a real-world project.

Come to our Meetup on March 4th at 17.30, at Saltmätargatan 8a, Stockholm.

 

https://github.com/Middlecon/DBImport
