Principal Data Engineer
14 hours ago

Medallia is the pioneer and market leader in Experience Management. Our award-winning SaaS platform, Medallia Experience Cloud, leads the market in the understanding and management of experience for candidates, customers, employees, patients, citizens and residents.

We are more than a software company. We want to be known as a company that does the right thing, no matter the challenge or controversy.

We are committed to creating a culture that values every person and every experience. Individual life experiences shape the way we interact with the world, which is why we encourage people to bring their whole selves to work each day.

The strength of our global workforce is the most significant contributor to our success. We believe: Every Experience Matters.

Talent is Everywhere. All Belong Here. At Medallia, we hire the whole person.


  • As a Principal Data Engineer on our Athena Platform team, you will be responsible for architecting and developing the building blocks of the next generation of products at Medallia.
  • You will create and maintain optimal data pipeline architecture.
  • You will identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • You will build analytics tools that utilize the data pipeline to provide actionable insights into customer feedback, applying sentiment analysis, NLP, and clustering methodologies.
  • You will keep our data separated and secure across national boundaries through multiple data centers.
  • You will create data tools for analytics and data scientist team members that assist them in building and optimizing our product.
  • You will collaborate with product and design teams to build innovative new features and products for the Customer Experience space.

  • Have performed software development in a production environment using a mainstream object-oriented programming language (e.g., Java or Python).
  • BA / BS degree in Computer Science or a related technical discipline, or equivalent practical experience.
  • Strong experience in a technical role working on distributed systems and web services.
  • 3+ years of data engineering experience, including distributed systems such as Spark, Hadoop, and Kafka.
  • Experience with ETL / ELT pipelines.
  • Strong experience with code quality best practices and implementing testing architectures and modularization on large codebases.

  • Experience with big data tools: Hadoop, Spark, and Kafka.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools such as Airflow.
  • Experience with stream-processing systems such as Spark Streaming.