
Sr. Data Engineer

San Bruno, CA
Hello,
 
One of our direct clients is urgently looking for a Sr. Data Engineer in San Bruno, CA.
 
Title: Sr. Data Engineer
Location: San Bruno, CA
Duration: 6 to 12+ months
Rate: DOE
 
Description:
Looking for a backend data engineer with Adobe Analytics or Google Analytics knowledge.

Must-haves
Excellent knowledge of SQL and Hive (HiveQL)
Ability to build data pipelines using PySpark, Hive, and Scala Spark
Proficiency in a programming language, preferably Python, Java, or Scala
Good understanding of Adobe Analytics or Google Analytics
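For candidates gauging fit, the HiveQL-style work implied by the must-haves might look like the following minimal sketch. It uses Python's standard-library SQLite so it runs anywhere; the `page_views` table and its columns are hypothetical stand-ins for the kind of clickstream data Adobe or Google Analytics exports produce.

```python
import sqlite3

# Hypothetical clickstream table: one row per page-view event,
# modeled loosely on an analytics export.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE page_views (visit_date TEXT, page TEXT, visitor_id TEXT)"
)
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?, ?)",
    [
        ("2024-01-01", "/home", "v1"),
        ("2024-01-01", "/home", "v2"),
        ("2024-01-01", "/cart", "v1"),
        ("2024-01-02", "/home", "v3"),
    ],
)

# Daily unique visitors per page; the same GROUP BY / COUNT(DISTINCT ...)
# query is valid HiveQL as well.
rows = conn.execute(
    """
    SELECT visit_date, page, COUNT(DISTINCT visitor_id) AS unique_visitors
    FROM page_views
    GROUP BY visit_date, page
    ORDER BY visit_date, page
    """
).fetchall()
```

In a production pipeline the same aggregation would typically run over a partitioned Hive table via PySpark or Scala Spark rather than SQLite.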

Good-to-haves
Ability to code in JavaScript would be a big plus
Tableau, Looker, and Power BI experience
Knowledge of Google Cloud Platform (GCP)
Prior experience with marketing datasets and building marketing reporting


Position Summary
• Very strong engineering skills, with an analytical approach and good programming ability.
• Provide business insights by leveraging internal tools, systems, databases, and industry data
• 5+ years of experience; experience in the retail business is a plus.
• Excellent written and verbal communication skills for varied audiences on engineering subject matter
• Ability to document requirements, data lineage, and subject matter in both business and technical terminology.
• Guide and learn from other team members.
• Demonstrated ability to transform business requirements into code, specific analytical reports, and tools
• This role will involve coding, analytical modeling, root cause analysis, investigation, debugging, testing, and collaboration with business partners, product managers, and other engineering teams.

Must Have
• Strong analytical background
• Self-starter
• Must be able to reach out to others and thrive in a fast-paced environment.
• Strong background in transforming big data into business insights


Technical Requirements
• Knowledge/experience on Teradata Physical Design and Implementation, Teradata SQL Performance Optimization
• Experience with Teradata Tools and Utilities (FastLoad, MultiLoad, BTEQ, FastExport)
• Advanced SQL (preferably Teradata)
• Experience working with large data sets, experience working with distributed computing (MapReduce, Hadoop, Hive, Pig, Apache Spark, etc.).
• Strong Hadoop scripting skills to process petabytes of data
• Experience in Unix/Linux shell scripting or similar programming/scripting knowledge
• Experience with ETL processes
• Real-time data ingestion (Kafka)
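As a small illustration of the ETL experience listed above, here is a minimal, self-contained extract-transform-load sketch in Python. The CSV schema is hypothetical, and SQLite stands in for a warehouse such as Teradata; a real pipeline would use the Teradata utilities (FastLoad, BTEQ) or Spark named in the requirements.

```python
import csv
import io
import sqlite3

# Extract: raw CSV as it might land from an upstream export (hypothetical schema).
raw = """order_id,amount,region
1001,25.50,west
1002,,east
1003,40.00,west
"""

# Transform: drop rows with missing amounts and cast fields to proper types.
reader = csv.DictReader(io.StringIO(raw))
clean = [
    (int(r["order_id"]), float(r["amount"]), r["region"])
    for r in reader
    if r["amount"]
]

# Load: insert into a warehouse table (SQLite as a stand-in for Teradata).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

# A downstream check: total revenue for one region.
total_west = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'west'"
).fetchone()[0]
```

The same extract/transform/load stages scale up directly: swap the in-memory CSV for HDFS or Kafka input and SQLite for the target warehouse.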

Nice to Have
• Development experience with Java, Scala, Flume, Python
• Cassandra
• Automic scheduler
• R/RStudio, SAS experience a plus
• Presto
• HBase
• Tableau or similar reporting/dashboarding tool
• Modeling and Data Science background
• Retail industry background

Education
BS degree in a technical field such as computer science, math, or statistics preferred
