Responsibilities
- Design and maintain robust data collection pipelines from a wide range of sources, including websites, documents, APIs, and raw sensor data
- Extract and structure information from unstructured or semi-structured formats into clean, standardized schemas
- Handle real-world data challenges such as pagination, rate limits, CAPTCHAs, noise, missing values, and inconsistent formatting (a pagination and rate-limit sketch follows this list)
- Clean, filter, and validate raw data to ensure high quality, consistency, and usability across our systems
- Develop small tools and utilities to support and automate data collection workflows
- Support the creation and maintenance of labeling pipelines for ML applications
- Collaborate with engineering and product teams to optimize data storage and access patterns
- Document data sources, collection methodologies, and processing procedures for reproducibility
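To make the pagination and rate-limit handling above concrete, here is a minimal sketch using `requests` and `BeautifulSoup`; the URL, page parameter, and CSS selectors are hypothetical placeholders, not a real source:

```python
import time

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/listings"  # hypothetical source


def fetch_page(page: int) -> str:
    """Fetch one results page, backing off when the server rate-limits us."""
    for attempt in range(5):
        resp = requests.get(BASE_URL, params={"page": page}, timeout=10)
        if resp.status_code == 429:  # rate limited: back off exponentially, retry
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        return resp.text
    raise RuntimeError(f"gave up fetching page {page}")


def scrape_all(max_pages: int = 50) -> list[dict]:
    rows = []
    for page in range(1, max_pages + 1):
        soup = BeautifulSoup(fetch_page(page), "html.parser")
        items = soup.select("div.listing")  # hypothetical selector
        if not items:  # empty page: we have walked past the last one
            break
        for item in items:
            rows.append({
                "title": item.select_one("h2").get_text(strip=True),
                "url": item.select_one("a")["href"],
            })
        time.sleep(1)  # be polite between pages
    return rows
```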
Requirements
- 0–2 years of experience in software development, data engineering, or related fields
- Degree in Computer Science, Computer Engineering, Information Systems, or equivalent technical background
- Understanding of HTML, CSS selectors, and how web pages are structured
- Strong problem-solving skills and an eye for detail
- Ability to work in a fast-paced environment and manage shifting priorities
Technical Skills
- Proficiency in Python, especially for data manipulation and automation
- Experience (academic or professional) with data extraction using tools like `requests`, `BeautifulSoup`, or similar
- Familiarity with REST APIs and the HTTP protocol
- Experience with data cleaning techniques (a short sketch follows this list) such as:
- Handling missing or inconsistent values
- Removing duplicates and outliers
- Standardizing formats (e.g., dates, units, text normalization)
- Validating data against schemas or expected ranges
- (Optional) Exposure to browser automation tools like Selenium or Playwright
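As a concrete illustration of the cleaning steps listed above, here is a minimal sketch using pandas; pandas itself, the column names, and the expected sensor range are assumptions for the example, not requirements of the role:

```python
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Standardize formats: parse dates, normalize free text
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    df["name"] = df["name"].str.strip().str.lower()

    # Handle missing or inconsistent values: coerce bad numbers to NaN, then drop
    df["temperature_c"] = pd.to_numeric(df["temperature_c"], errors="coerce")
    df = df.dropna(subset=["date", "name"])

    # Remove duplicates and validate against an expected range
    df = df.drop_duplicates(subset=["name", "date"])
    df = df[df["temperature_c"].between(-40, 150)]  # hypothetical sensor range

    return df.reset_index(drop=True)
```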
Nice to Have
- Experience with web scraping libraries/frameworks like Scrapy, Playwright, or Selenium (a minimal Playwright sketch follows this list)
- Familiarity with proxy usage, headless browsers, or CAPTCHA bypass techniques
- Understanding of database systems (SQL or NoSQL)
- Exposure to rapid prototyping tools like Streamlit
- Previous experience working with or around industrial equipment or maintenance systems
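For the browser automation and headless-browser work mentioned above, here is a minimal sketch with Playwright's synchronous API (installed via `pip install playwright` followed by `playwright install chromium`); the URL and selectors are hypothetical:

```python
from playwright.sync_api import sync_playwright

# Render a JavaScript-heavy page headlessly and pull text out of a table.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/dashboard")  # hypothetical JS-heavy page
    page.wait_for_selector("table.assets")      # wait for client-side render
    rows = page.locator("table.assets tr").all_inner_texts()
    browser.close()

print(rows)
```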
What We Do
Tractian is a machine intelligence company that offers industrial monitoring systems. It builds streamlined hardware-software solutions that give maintenance technicians and industrial decision-makers comprehensive oversight of their operations, democratizing access to sophisticated real-time monitoring and asset operations tools.
Tractian's solutions are used in environments that collectively account for 5% of global industrial output. The company's broad market reach is evident in a customer base that spans industries and includes John Deere, Procter & Gamble, Caterpillar, Goodyear, Carrier, Johnson Controls, and Bimbo, owner of the Little Bites and Thomas Bagels brands. On average, Tractian's customers see a 6-12x ROI, saving $6,000 per monitored machine annually.
In a major milestone and an industry first, Tractian launched the AI-Assisted Maintenance category in the industrial sector. In this new paradigm, artificial intelligence identifies machine problems and suggests preventive actions, giving maintenance professionals invaluable insight and support. Importantly, Assisted Maintenance is intended to augment maintenance professionals, enabling more accurate diagnoses with human-in-the-loop feedback.
Tractian's mission is to elevate this category of workers in a highly impactful way. Assisted Maintenance will give maintenance professionals a new level of support: by combining shop-floor expertise with our technology, maintainers will be able to anticipate and address issues with unprecedented accuracy and speed.