Lingaro is hiring a Data Engineer; here is how you can apply


About Lingaro 

Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data.

Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

Lingaro Recruitment

Job Role: Data Engineer

Qualification: Bachelor's / Master's Degree

Experience: Fresher

Batch: 2025 / 2024 / 2023 / 2022

Salary: up to ₹14 LPA

Job Location: Bangalore

Last Date: ASAP

Job Notification Join us on Telegram: Click here

Requirements:

  • A bachelor’s or master’s degree in Computer Science, Information Systems, or a related field is typically required.
  • Commercial work experience as a Data Engineer or in a similar role.
  • Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, Azure Databricks.
  • Extensive knowledge of MS Fabric components: Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Data Warehouse, Power BI integration, Semantic Models, Spark Jobs, Notebooks, Dataflow Gen1 and Gen2, and KQL.
  • Ability to integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams.
  • Strong understanding of Delta Lake, Parquet, and distributed data systems.
  • Strong programming skills in Python, PySpark, Scala, and Spark SQL/T-SQL for data transformations.
  • Knowledge of source/version control along with CI/CD is a plus.
  • Strong experience in implementing and managing a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL). Proficiency in data integration techniques, ETL processes, and data pipeline architectures.
  • Solid understanding of data processing techniques such as batch processing, real-time streaming, and data integration.
  • Proficiency in working with relational and non-relational databases such as MSSQL, MySQL, PostgreSQL, or Cassandra. Knowledge of data warehousing concepts and technologies like Redshift, Snowflake, or BigQuery is beneficial.
  • Good knowledge of data storage architectures, such as delta lakes, data warehouses, and distributed file systems.
  • Proficient in data modeling techniques and database optimization. Knowledge of query optimization, indexing, and performance tuning is necessary for efficient data retrieval and processing.
  • Understanding of data security best practices and experience implementing data governance policies. Familiarity with data privacy regulations and compliance standards is a plus.
  • Strong problem-solving abilities to identify and resolve issues related to data processing, storage, or infrastructure. Analytical mindset to analyze and interpret complex datasets for meaningful insights.
  • Experience in designing and creating integration and unit test will be beneficial.
  • Excellent communication skills to effectively collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders. Ability to convey technical concepts to non-technical stakeholders in a clear and concise manner.
  • A passion for staying updated with emerging technologies and industry trends in the field of big data engineering. Willingness to learn and adapt to new tools and techniques to enhance data processing, storage, and analysis capabilities.
  • Proficiency in SQL and NoSQL database management systems (BigQuery is a must).
  • Candidate should be able to design, configure, and manage databases to ensure optimal performance and reliability.
  • Experience with data integration tools and techniques, such as ETL and ELT. Candidates should be able to integrate data from multiple sources and transform it into a format suitable for analysis.
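For candidates new to the field, the ETL skills listed above can be pictured with a minimal, self-contained sketch. This is an illustration only, using the Python standard library; the field names and the validation rule are hypothetical, and a real pipeline would read from a database, API, or lake storage rather than an in-memory string:

```python
import csv
import io

# Hypothetical raw feed; in practice this would come from a source system
# such as a database table, a REST API, or files landed in a data lake.
RAW_CSV = """order_id,amount,country
1,100.50,IN
2,abc,US
3,250.00,IN
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw source into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and drop records that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "country": row["country"],
            })
        except ValueError:
            continue  # skip malformed records (row 2 has amount "abc")
    return clean

def load(rows: list[dict]) -> dict:
    """Load: aggregate into a tiny 'warehouse' table keyed by country."""
    totals: dict = {}
    for row in rows:
        totals[row["country"]] = totals.get(row["country"], 0.0) + row["amount"]
    return totals

totals = load(transform(extract(RAW_CSV)))
print(totals)  # {'IN': 350.5}
```

The same extract-transform-load shape scales up in tools like Azure Data Factory, Dataflow Gen2, or PySpark, where each stage becomes a pipeline activity or DataFrame transformation instead of a plain function.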

Nice to Have: Certifications are an advantage.

  • DP-600 (Fabric Analytics Engineer Associate) certification
  • DP-700 (Microsoft Certified: Fabric Data Engineer Associate)
  • DP-203 (Azure Data Engineer Associate) certification
  • Databricks Associate / Spark Developer Associate certification.

Tasks:

You will be part of the team accountable for the design, modeling, and development of the entire GCP data ecosystem (Cloud Storage, Cloud Functions, BigQuery) for one of our clients. You will be involved throughout the whole process, starting with gathering, analyzing, modeling, and documenting business and technical requirements. The role will include direct contact with clients.

  • Designing and implementing data processing systems using Microsoft Fabric, Azure Data Analytics, Databricks, and distributed frameworks like Hadoop, Spark, Snowflake, Airflow, or similar technologies. This involves writing efficient and scalable code to process, transform, and clean large volumes of structured and unstructured data.
  • Building data pipelines to ingest data from various sources such as databases, APIs, or streaming platforms. Integrating and transforming data to ensure its compatibility with the target data model or format.
  • Designing and optimizing data storage architectures, including OneLake, data lakes, data warehouses, serverless storage, and distributed file systems. Implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval. Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency.
  • Designing and implementing data models that support efficient data storage, retrieval, and analysis. Collaborating with data scientists and analysts to understand their requirements and provide them with well-structured and optimized data for analysis and modeling purposes.
  • Utilizing frameworks like Spark to perform distributed computing tasks, such as parallel processing, distributed data processing, or machine learning algorithms.
  • Implementing security measures to protect sensitive data and ensuring compliance with data privacy regulations. Establishing data governance practices to maintain data integrity, quality, and consistency.
  • Identifying and resolving issues related to data processing, storage, or infrastructure. Monitoring system performance, identifying anomalies, and conducting root cause analysis to ensure smooth and uninterrupted data operations.
  • Collaborating with cross-functional teams including data scientists, analysts, and business stakeholders to understand their requirements and provide technical solutions. Communicating complex technical concepts to non-technical stakeholders in a clear and concise manner.
  • Independence and responsibility for delivering a solution
  • Ability to work under Agile and Scrum development methodologies
  • Staying updated with emerging technologies, tools, and techniques in the field of big data engineering. Exploring and recommending new technologies to enhance data processing, storage, and analysis capabilities.
  • Train and mentor junior data engineers, providing guidance and knowledge transfer.
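The partitioning technique mentioned in the storage-optimization task above can be sketched in a few lines. This is a conceptual illustration only, not the Fabric or Databricks API; real engines partition files in a lakehouse (e.g. Delta Lake partitions or Spark bucketing) rather than in-memory lists, and the hash function here is a hypothetical stand-in:

```python
# Hash partitioning: records are routed to a fixed number of buckets by key,
# so a query for one key only has to scan a single bucket instead of the
# whole dataset.
NUM_PARTITIONS = 4

def partition_of(key: str) -> int:
    # Simple deterministic hash; engines like Spark use similar routing
    # (e.g. a murmur hash) when shuffling or bucketing data.
    return sum(key.encode()) % NUM_PARTITIONS

buckets: list[list[dict]] = [[] for _ in range(NUM_PARTITIONS)]
events = [
    {"user": "alice", "value": 1},
    {"user": "bob", "value": 2},
    {"user": "alice", "value": 3},
]

# "Write side": route every record to its bucket.
for event in events:
    buckets[partition_of(event["user"])].append(event)

# "Read side": a point lookup scans only one bucket.
alice_events = [e for e in buckets[partition_of("alice")] if e["user"] == "alice"]
print(len(alice_events))  # 2
```

Because the hash is deterministic, all records for a given key land in the same bucket, which is the same property that makes partition pruning effective in a lakehouse.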

Why join us:

  • Stable employment. On the market since 2008, with 1,300+ talents currently on board across 7 global sites.
  • 100% remote.
  • Flexibility regarding working hours.
  • Full-time position
  • Comprehensive online onboarding program with a “Buddy” from day 1.
  • Cooperation with top-tier engineers and experts.
  • Unlimited access to the Udemy learning platform from day 1.
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly.
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
  • Grow as the company grows. 76% of our managers are internal promotions.
  • A diverse, inclusive, and values-driven community.
  • Autonomy to choose the way you work. We trust your ideas.
  • Create our community together. Refer your friends to receive bonuses.
  • Activities to support your well-being and health.
  • Plenty of opportunities to donate to charities and support the environment.

How to Apply for Lingaro in 2025?

Dreaming of a Lingaro career? Follow these simple steps to apply for their 2025 opportunities:

  • Click Apply Here: Head straight to the Lingaro career page using the button below.
  • Start Your Application: Hit Apply to begin.
  • Register or Login: Create an account if you're new, or log in if you're already registered.
  • Complete the Form: Fill in all required details accurately.
  • Upload Documents: Submit your resume and any other requested documents.
  • Review and Verify: Double-check all information before submitting.
  • Submit: Hit submit and take a step closer to your dream career.

Apply Link for Lingaro: Apply Here


Job Notification Join us on WhatsApp: Click here

Read More:

Adobe is Hiring Software Engineers (Up to ₹12 LPA) | 0-1 years

NielsenIQ is hiring software engineer; here is how you can apply

Rippling Hiring Software Engineers – Huge Salary – Check Eligibility!
