Senior Data Engineer at Talkdesk (Remote)
At Talkdesk, we are courageous innovators focused on redefining customer experience, making the impossible possible for companies globally. We champion an inclusive and diverse culture representative of the communities in which we live and serve. And we give back to our community by volunteering our time, supporting non-profits and minimizing our global footprint. Each day, thousands of employees, customers and partners all over the world trust Talkdesk to deliver a better way to great experiences.
We are recognized as a cloud contact center leader by many of the most influential research organizations, including Gartner and Forrester. With $498 million in total funding, a valuation of more than $10 billion, and a ranking of #17 on the Forbes Cloud 100 list, now is the time to be part of the Talkdesk legacy and help accelerate our success in a new decade of transformational growth.
At Talkdesk, our Engineering team follows a microservice architecture approach to build the next generation of Talkdesk, with vertical teams responsible for all the decisions under their services. Through our Agile Coaches, we promote agile and collaborative practices: we are huge fans of Scrum and pair programming, and we won’t let a single line of code reach production without a peer code review. We strongly believe that the only true authority stems from knowledge, not from position, and we always treat others with respect, deference and patience.
Responsibilities:
- Develop, deploy and maintain a Data Mesh solution to power Talkdesk’s Data Science, BI and Reporting products;
- Design batch or streaming dataflows capable of processing large quantities of fast-moving unstructured data;
- Monitor dataflows and their underlying systems, promoting the changes needed to ensure scalable, high-performance solutions and to assure data quality and availability;
- Work closely with the rest of Talkdesk’s engineering teams to deliver a world-class Data Mesh solution.
Requirements:
- Strong understanding of distributed computing principles and distributed systems;
- At least 5 years of experience in the field;
- Experience building stream-processing systems using solutions such as Spark Streaming, Flink, Kafka Streams, Storm or similar;
- Experience with Big Data processing frameworks such as Hadoop, Spark or Samza;
- Good knowledge of Big Data analytical tools, such as Hive, Impala, Presto or Drill;
- Experience with integration of data from multiple data sources;
- Experience with traditional RDBMS and data modeling;
- Experience with Data Warehouses and related concepts, including knowledge of Redshift or other Data Warehousing solutions;
- Experience with NoSQL databases, such as MongoDB, Cassandra, and HBase;
- Experience with messaging systems, such as Kafka or RabbitMQ;
- Experience with cloud environments such as AWS or Google Cloud;
- Strong written and verbal English communication skills.
Nice to have:
- BS/MS Degree in Computer Engineering, Computer Science, Applied Math, or a similar area;
- Experience in Agile development methodology/Scrum;
- Good understanding of Lambda and Kappa Architectures, along with their advantages and drawbacks;
- Experience managing Hadoop, Spark or Flink clusters, including all of their associated services.