IBM is hiring for Data Engineer-Data Integration | Apply Now!


IBM is hiring for the position of Data Engineer-Data Integration in Pune, India. Candidates with a Bachelor’s/ Master’s Degree are eligible to apply for this position. The complete details, eligibility criteria, and requirements are provided below.

Job Description:

Company Name: IBM
Position: Data Engineer-Data Integration
Qualification: Bachelor’s/ Master’s Degree
Batch: Recent Batches
Experience: Entry Level
Location: Pune, India

Key Responsibilities:

  • Implement and validate predictive models, and develop and maintain statistical models focused on big data, using various statistical and machine learning techniques.
  • Design and deploy enterprise search applications using tools such as Elasticsearch and Splunk, based on specific client requirements.
  • Collaborate in an Agile environment with cross-functional teams including data scientists, engineers, consultants, and database administrators to apply analytical rigor and statistical methods to behavioral prediction challenges.
  • Develop efficient and reusable data pipelines or write programs to cleanse and integrate data for modeling purposes (a minimal sketch follows this list).
  • Build predictive and prescriptive models and assess their effectiveness through thorough evaluation of results.
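
The pipeline and modeling bullets above can be pictured with a minimal Python sketch like the one below. The column names, sample records, and the choice of a logistic-regression model are illustrative assumptions, not details taken from the role description.

```python
# Minimal sketch of a reusable cleanse-then-model step (illustrative only);
# column names, sample data, and the model choice are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, fill missing numeric values, and normalize text keys."""
    df = df.drop_duplicates()
    df["amount"] = df["amount"].fillna(df["amount"].median())
    df["segment"] = df["segment"].str.strip().str.lower()
    return df

def integrate(sources: list[pd.DataFrame]) -> pd.DataFrame:
    """Stack records from several sources into one modeling table."""
    return pd.concat(sources, ignore_index=True)

# Tiny in-memory sample so the sketch runs end to end.
raw = pd.DataFrame({
    "amount": [120.0, None, 87.5, 120.0],
    "segment": [" Retail", "retail", "Wholesale ", " Retail"],
    "churned": [0, 1, 0, 0],
})
data = cleanse(integrate([raw]))

# Fit a simple predictive model and report training accuracy as a basic check.
features = pd.get_dummies(data[["amount", "segment"]])
model = LogisticRegression().fit(features, data["churned"])
print("training accuracy:", model.score(features, data["churned"]))
```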

Eligibility Criteria:

  • Bachelor’s/ Master’s Degree

Required Technical and Professional Expertise:

  • Proven expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization.
  • Strong experience in building data ingestion and transformation pipelines using Talend to process both structured and unstructured data from diverse sources.
  • Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake features.
  • Hands-on experience with dimensional and relational data modeling techniques to meet analytics and reporting needs (see the sketch after this list).
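
The dimensional-modeling bullet above can be illustrated with a short Python sketch that splits a flat extract into one dimension and one fact table. The table and column names are hypothetical and chosen only for illustration.

```python
# Minimal star-schema sketch: derive a customer dimension and an orders fact
# table from a flat extract. All names and sample rows are illustrative.
import pandas as pd

flat = pd.DataFrame({
    "order_id": [1001, 1002, 1003],
    "customer_name": ["Asha", "Ravi", "Asha"],
    "customer_city": ["Pune", "Mumbai", "Pune"],
    "order_amount": [250.0, 480.0, 125.0],
})

# Dimension: one row per distinct customer, with a surrogate key.
dim_customer = (
    flat[["customer_name", "customer_city"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1

# Fact: measures plus the foreign key that points back to the dimension.
fact_orders = flat.merge(dim_customer, on=["customer_name", "customer_city"])
fact_orders = fact_orders[["order_id", "customer_key", "order_amount"]]

print(dim_customer)
print(fact_orders)
```

In a Snowflake/Talend setup, the same split would typically be expressed as Talend jobs loading Snowflake tables; the pandas version here is only a compact way to show the modeling idea.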

Preferred Technical and Professional Experience:

  • Understanding of how to optimize Snowflake workloads through techniques such as clustering keys, caching strategies, and query profiling (a brief sketch follows this list).
  • Ability to implement effective data validation, cleansing, and governance frameworks within ETL processes.
  • Proficiency in SQL and/or Shell scripting for custom data transformations and automation tasks.
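
The Snowflake tuning and scripting bullets above can be sketched with the snowflake-connector-python package, as below. The credentials, warehouse, database, and table names are placeholders rather than details from the posting, and the script assumes access to a Snowflake account.

```python
# Sketch of two Snowflake tuning tasks named above: defining a clustering key
# and profiling recent slow queries. Credentials and object names are
# placeholders, not details taken from the job posting.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    warehouse="ANALYTICS_WH",  # hypothetical warehouse
    database="SALES_DB",       # hypothetical database
)
cur = conn.cursor()

# Cluster a large fact table on its most common filter column.
cur.execute("ALTER TABLE PUBLIC.FACT_ORDERS CLUSTER BY (ORDER_DATE)")

# Profile the slowest recent queries via the QUERY_HISTORY table function.
cur.execute(
    "SELECT query_text, total_elapsed_time "
    "FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) "
    "ORDER BY total_elapsed_time DESC LIMIT 5"
)
for query_text, elapsed_ms in cur.fetchall():
    print(elapsed_ms, query_text[:80])

cur.close()
conn.close()
```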

About Company:

At IBM, we do more than work. We create. We create as technologists, developers, and engineers. We create with our partners. We create with our competitors. If you’re searching for ways to make the world work better through technology and infrastructure, software and consulting, then we want to work with you. We’re here to help every creator turn their “what if” into what is. Let’s create something that will change everything.

How To Apply?

  • First, read through all of the job details on this page.
  • Scroll down and press the Click Here button.
  • Click on the apply link to be redirected to the official website.
  • Fill in the application form with the required information.
  • Cross-check the information you’ve provided before submitting the application.

Apply Link: Click Here

Join our Telegram Group: Click Here
