
Principal Data Platform Engineer

Posted 7 days ago · USD 189,000 - 236,000 / year
Motive

About the Job:

As a Principal Data Platform Engineer, you will take full ownership of key data platform initiatives and the data management life cycle, including data ingestion, processing, storage, querying, and cost-reduction efforts, in service of delivering product features to internal and external Motive customers.

We are looking for a technical leader in the Data Platform area who has built full data ingestion, transformation, and analytics systems on AWS and Kubernetes at multiple companies. This individual will have faced and solved challenges spanning many feature areas along the way, applying best practices throughout, and will contribute significantly to driving Motive’s Data Platform vision.


The Data Platform team’s work spans the following areas:

  1. Build scalable systems and services for data ingestion, access, processing, and query to enable data-driven product features.
  2. Collaborate closely with various stakeholders and our backend product teams to improve and add features to the platform.


What You’ll Do:

  • Work with other leaders in the Platform area to define and plan the long-term strategy for the Data Platform.
  • Design and develop scalable distributed systems and frameworks for data management
  • Address fault-tolerance and high-availability issues, scale ingestion pipelines, and improve and add features to the ETL framework while maintaining SLAs on performance, reliability, and system availability.
  • Collaborate with engineers across teams to identify and deliver cross-functional features
  • Participate in all aspects of the software development life cycle, from design to implementation and delivery.


What We’re Looking For:

  • 8+ years of hands-on software engineering experience
  • Backend programming skills, including multi-threading and concurrency, with proficiency in one or more languages such as Python
  • Strong CS fundamentals including data structures, algorithms, and distributed systems
  • Experience in designing, implementing, and operating highly scalable software systems and services
  • Experience building systems using technologies like Apache Kafka, Apache Spark, Airflow, Kubernetes
  • Excellent troubleshooting skills and track record of implementing creative solutions
  • Hands on experience with containerized platforms like Docker and Kubernetes
  • BS in Computer Science or a related field; Master’s preferred
  • Excellent verbal and written skills. You collaborate effectively with other teams and communicate clearly about your work.


This role can be based out of any of the following locations:

  • Hybrid - Austin, Texas
  • Hybrid - Buffalo, New York
  • Hybrid - Nashville, Tennessee
  • Hybrid - San Francisco, California
  • Hybrid - Seattle, Washington

