Zeta is hiring for Data Engineer I | Apply Now!


Zeta is hiring for the position of Data Engineer I in Bangalore, India. Candidates with a Bachelor's or Master's degree are eligible to apply. Complete details, eligibility criteria, and requirements are provided below.

Job Description:

Company Name: Zeta
Position: Data Engineer I
Qualifications: Bachelor's/ Master's Degree
Experience: 1 – 3 Years
Location: Bangalore, India

Key Responsibilities:

  • Database Design and Management: Design, implement, and maintain robust database systems while ensuring optimal performance, data integrity, and timely resolution of database issues.
  • ETL (Extract, Transform, Load) Processes: Develop, manage, and optimize ETL workflows to efficiently move and transform data between systems, ensuring the reliability of data pipelines.
  • Data Modeling: Design and update logical and physical data models to accurately represent and organize data structures.
  • Data Warehousing: Build and maintain scalable data warehouses to support the storage, retrieval, and analysis of large datasets.
  • Data Integration: Integrate data from diverse sources, including APIs, internal databases, and external datasets, to enable unified data access and reporting.
  • Data Quality and Governance: Implement data quality checks, enforce data governance standards, and contribute to the development of governance policies and best practices.
  • Scripting and Programming: Automate and streamline data processes using programming languages such as Python, Java, and SQL, including the development of validation scripts and error-handling mechanisms (see the sketch after this list).
  • Version Control: Utilize version control systems like Git to manage and track changes in the data pipeline codebase effectively.
  • Monitoring and Optimization: Establish monitoring frameworks to assess system performance and health, and optimize data processes for scalability, speed, and efficiency.
  • Cloud Platforms: Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-based tools for data storage, processing, and analytics.
  • Security: Implement and maintain data security measures in compliance with organizational and regulatory standards, ensuring the protection of sensitive information.
  • Troubleshooting and Support: Provide technical support for data-related issues, conduct root cause analyses, and implement preventive measures to ensure system reliability.
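
The scripting bullet above is the most hands-on of these responsibilities. As a rough illustration, here is a minimal Python sketch of a batch validation step with basic error handling; the record fields ("user_id", "amount") and the zero-error threshold are hypothetical assumptions, not details of Zeta's stack.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline.validation")


def validate_batch(rows):
    """Run simple data-quality checks on a batch of records.

    `rows` is assumed to be a list of dicts with hypothetical
    'user_id' and 'amount' fields; a real pipeline would take the
    schema from a data contract or catalog.
    """
    errors = []
    for i, row in enumerate(rows):
        if row.get("user_id") is None:
            errors.append(f"row {i}: missing user_id")
        amount = row.get("amount")
        if amount is not None and amount < 0:
            errors.append(f"row {i}: negative amount {amount}")
    return errors


def run_validation(rows, max_errors=0):
    """Fail the batch loudly if validation errors exceed the threshold."""
    try:
        errors = validate_batch(rows)
    except Exception:
        logger.exception("validation crashed; failing the batch")
        raise
    if len(errors) > max_errors:
        for e in errors:
            logger.error(e)
        raise ValueError(f"{len(errors)} validation error(s); aborting load")
    logger.info("batch of %d rows passed validation", len(rows))


if __name__ == "__main__":
    run_validation([{"user_id": 1, "amount": 42.0},
                    {"user_id": 2, "amount": 3.5}])
```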

Eligibility Criteria:

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related engineering field, with 1–3 years of experience in data engineering, BI engineering, or data warehouse development.
  • Proficient in one or more programming languages, preferably Python or Java.
  • Strong expertise in writing complex SQL queries for data extraction and manipulation.
  • Familiarity with stream-processing and workflow orchestration tools such as Apache Flink and Apache Airflow (a minimal Airflow example follows this list).
  • Working knowledge of DBT for data transformation and pipeline development.
  • Experience with container orchestration using Kubernetes.
  • In-depth understanding of Apache Spark architecture and internals, with hands-on experience in large-scale data processing (see the Spark sketch after this list).
  • Experience working with distributed SQL engines such as Amazon Athena or Presto.
  • Proven experience in building and managing ETL data pipelines.
  • Ability to evaluate and select appropriate technologies based on principles of reliability, scalability, and maintainability, avoiding unnecessary complexity or buzzwords.
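
For the orchestration tooling mentioned above, a minimal Airflow DAG wiring an extract-transform-load sequence might look like the sketch below. The DAG id, schedule, and task bodies are illustrative assumptions, not Zeta's actual pipelines; the imports target Airflow 2.x.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting")


def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming")


def load():
    # Placeholder: write the transformed records to the warehouse.
    print("loading")


with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # hypothetical cadence (Airflow 2.4+ syntax)
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the tasks strictly in sequence.
    t_extract >> t_transform >> t_load
```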
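Likewise, a small PySpark sketch of the kind of distributed aggregation referenced in the Spark bullet; the input path and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

# Hypothetical input: a Parquet dataset of transactions with
# 'merchant_id' and 'amount' columns.
txns = spark.read.parquet("s3://example-bucket/transactions/")

# Aggregate spend per merchant; Spark distributes this across executors.
per_merchant = (
    txns.groupBy("merchant_id")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
)

per_merchant.write.mode("overwrite").parquet("s3://example-bucket/agg/per_merchant/")
spark.stop()
```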

Skills:

  • Data Modeling and Architecture: Design and implement scalable and efficient data models; develop and maintain conceptual, logical, and physical data architectures.
  • ETL Development: Build, optimize, and maintain ETL pipelines to enable efficient data movement across systems; implement data transformation and cleansing processes to ensure accuracy and integrity.
  • Data Warehouse Management: Contribute to the design, development, and maintenance of data warehouses to support analytical and reporting needs.
  • Data Integration: Collaborate with cross-functional teams to integrate data from multiple sources; design and implement both real-time and batch data integration solutions.
  • Data Quality and Governance: Establish, monitor, and enforce data quality standards in alignment with governance policies and best practices (see the sketch after this list).
  • Performance Tuning: Monitor and optimize database performance for large-scale datasets; identify, troubleshoot, and resolve issues related to data processing, storage, and scalability.
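
As a concrete example of the data-quality skill above, the following Python sketch implements a simple automated quality gate that checks per-column null rates against thresholds; the rules, column names, and sample batch are invented, not drawn from Zeta's systems.

```python
import pandas as pd

# Hypothetical per-column quality rules.
QUALITY_RULES = {
    "user_id": {"max_null_rate": 0.0},   # primary key: no nulls allowed
    "amount": {"max_null_rate": 0.01},   # tolerate up to 1% missing
}


def check_null_rates(df: pd.DataFrame, rules: dict) -> list:
    """Return a list of rule violations for the given batch."""
    violations = []
    for column, rule in rules.items():
        null_rate = df[column].isna().mean()
        if null_rate > rule["max_null_rate"]:
            violations.append(
                f"{column}: null rate {null_rate:.2%} exceeds "
                f"{rule['max_null_rate']:.2%}"
            )
    return violations


# A toy batch with one missing 'amount' to show the gate catching it.
batch = pd.DataFrame({"user_id": [1, 2, 3], "amount": [9.5, None, 7.0]})
for problem in check_null_rates(batch, QUALITY_RULES):
    print("violation:", problem)
```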

About Company:

Founded in 2015, Zeta provides a next-generation credit card processing platform. Zeta's cloud-native, fully API-enabled stack offers a comprehensive range of capabilities, including processing, issuing, lending, core banking, fraud detection, and loyalty programs. With a strong focus on technology, Zeta has more than 1,700 employees and contractors, over 70% of whom work in technology roles.

How To Apply?

  • First, read through all of the job details on this page.
  • Scroll down and press the Click Here button.
  • Click on the apply link to be redirected to the official website.
  • Fill in the required details.
  • Cross-check the information you've provided before submitting the application.

Apply Link 1 : Click Here

Apply Link 2: Click Here


