
Sr. Data Engineer

Sunnyvale, CA

We are urgently looking for a Data Engineer for our direct client requirement.

TITLE: Data Engineer
LOCATION: Sunnyvale, CA
DURATION: 6+ months
RATE: DOE

Job Description:
As part of the International eCommerce Data Engineering team, you'll be responsible for the design, development, and operation of large-scale data pipelines and systems. You will focus on real-time and batch data pipelines, streaming analytics, distributed big data, and the underlying platform and infrastructure located on-premises and in the cloud. You'll interact with engineers, product managers, BI developers, and architects to provide scalable, robust technical solutions.

Essential Job Functions & Responsibilities:

  • Minimum of 6–8 years of big data development experience
  • Demonstrates up-to-date expertise in data engineering and complex data pipeline development
  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built
  • Experience with Java, Python, Hive, and Spark to write data pipelines and data processing layers
  • Demonstrates expertise in writing complex, highly optimized queries across large data sets
  • Proven, working expertise with big data technologies such as Hadoop, Hive, Kafka, Druid, and Spark
  • Highly proficient in SQL
  • Experience with cloud technologies (Azure)
  • Experience with relational, NoSQL, and in-memory data stores (Oracle, Cassandra, Druid) a big plus
  • Provides and supports the implementation and operation of streaming and batch data pipelines and analytical solutions
  • Performance tuning experience on systems working with large data sets
  • Experience with clickstream data processing
  • Experience with data governance (data quality, metadata management, security, etc.)
  • Experience developing RESTful API data services
  • Retail experience is a huge plus
Must Have
  • Strong analytical background
  • Self-starter
  • Must be able to reach out to others and thrive in a fast-paced environment
  • Strong background in transforming big data into business insights
Technical Requirements
  • Knowledge/experience on Teradata Physical Design and Implementation, Teradata SQL Performance Optimization
  • Experience with Teradata Tools and Utilities (FastLoad, MultiLoad, BTEQ, FastExport)
  • Advanced SQL (preferably Teradata)
  • Experience working with large data sets and with distributed computing (MapReduce, Hadoop, Hive, Pig, Apache Spark, etc.)
  • Strong Hadoop scripting skills to process petabytes of data
  • Experience in Unix/Linux shell scripting or similar programming/scripting knowledge
  • Experience with ETL processes
  • Real-time data ingestion (Kafka)

Nice to Have

BS degree in a technical field such as computer science, math, or statistics preferred