Principal Engineer, Data Platform

India | Remote

About SupportLogic

SupportLogic SX™ is a platform that elevates the customer service experience by leveraging natural language processing (NLP) and machine learning (ML). The platform integrates seamlessly with your existing ticketing system (such as Salesforce Service Cloud, Zendesk, or Microsoft Dynamics), reads every comment in every ticket to extract key signals related to customer sentiment and churn, predicts outcomes, and provides proactive recommendations. Customer Support and Success organizations use the platform to stay on top of how their customers feel about them, improving customer relationships, products, and operations.

We are a well-funded startup with investments from top-tier Silicon Valley investors (Sorenson Ventures, Sierra Ventures) and a customer list that is a who's who of Enterprise IT companies. We are privileged to have customers who are not only outspoken fans of our product but also prove it by renewing every year.



Overview of the role:

The team at SupportLogic is dedicated to improving the experience of today's Support professionals while helping companies serve and retain their customers through intelligent escalations, proactive interactions, and problem resolution. To serve our customers in a way that enables them to serve their users, we are seeking customer-centric, energetic, and highly motivated individuals to join our team. As a Principal Engineer, Data Platform at SupportLogic, you will focus primarily on the design, architecture, and implementation of the SupportLogic data platform on public clouds.

 

About you (don't worry if you don't have this whole list; we expect you to learn with us):

  • 8+ years of experience architecting, designing, developing, and optimizing large-scale data solutions, including at least 5 years in the SaaS domain
  • Strong experience building with GCP and/or AWS infrastructure and services
  • Expert knowledge and deep hands-on experience with cloud data platforms/services such as Snowflake, Azure Synapse, Informatica IICS, Databricks, etc.; hands-on experience with Snowflake is a must-have
  • Excellent design and development experience with SQL and NoSQL databases, both OLTP and OLAP
  • Proficient in dimensional modeling concepts and techniques
  • Experience designing and developing custom ETL pipelines using SQL, scripting languages (Python/Shell), and well-defined APIs
  • Excellent verbal and written communication skills
  • Experience with data security and privacy best practices and threat modeling
  • Fluency developing tools, applications and backend services in Python
  • Strong experience with Snowflake and PostgreSQL including writing complex queries and performance tuning
  • Good understanding of the challenges of data quality and data testing and how to mitigate them
  • Command of Python and experience with modern data stack tools such as Airflow, dbt, Airbyte, Fivetran, Stitch, Segment, Apache Superset, Snowflake, Spark, etc.
  • Education in Computer Science or Engineering, or training in a related field
  • Knowledge of Pub/Sub architectures or other messaging / streaming technologies
  • Experience with AI/ML or GenAI infrastructure or applications


The work you’ll do:
 

  • Establish best practices for securing data and infrastructure, and evangelize security-focused design and development
  • Build data pipelines and structures that power predictive models and intelligence services on large-scale datasets with high uptime and production-level quality
  • Integrate with CRMs to ingest customer data to build analytics data sets
  • Interface with internal Data Science, Engineering and Data Consumer teams to understand customer and product data needs
  • Own data quality throughout all stages of acquisition and processing, including data collection, ETL/data wrangling, data normalization and ground truth generation
  • Advocate for better software development tools and practices to help build a transparent engineering culture
  • Participate in scrum activities to plan, execute and release software
  • Mentor new engineers on SupportLogic Platform development



More Information on SupportLogic
SupportLogic operates in the Artificial Intelligence industry. Founded in 2016 and headquartered in San Jose, CA, the company has 80 employees. It offers perks and benefits such as friends outside of work, intracompany committees, an open-door policy, an OKR operational model, team-based strategic planning, and group brainstorming sessions.