ELife
Software Engineer (Remote)
Software Engineer | ELife | India
Our Company:
A fast-growing start-up headquartered in San Francisco, CA, USA in the heart of Silicon Valley. We recruit worldwide as our customer base is global.
Vision: Reliable ground transportation services globally with all types of vehicles.
Mission: Empower high-quality local fleets.
Corporate Culture: Team first
- Partner-centric
- Team collaboration
- Never “not my job”; end-to-end ownership
- Continuous learning and improvement
- Hard-working and pragmatic
- Don’t be a middleman
- Result-driven
Job title: Crawler Engineer
A Crawler Engineer is primarily responsible for designing and developing web crawler systems to scrape, clean, and analyze data from various platforms. This position requires a deep understanding of how web crawlers work, familiarity with common anti-crawling techniques and countermeasures, and the ability to handle large-scale data processing.
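As a rough illustration of the kind of crawler work described above, a minimal spider built with Scrapy (one of the frameworks named in the requirements) might look like the sketch below; the target site, URL, and selectors are hypothetical.

```python
# Minimal, hypothetical Scrapy spider: the domain, start URL, and
# XPath selectors below are invented for illustration only.
import scrapy


class ListingSpider(scrapy.Spider):
    name = "listings"
    start_urls = ["https://example.com/listings"]  # hypothetical target

    def parse(self, response):
        # Extract one record per listing row using XPath.
        for row in response.xpath("//div[@class='listing']"):
            yield {
                "title": row.xpath(".//h2/text()").get(default="").strip(),
                "price": row.xpath(".//span[@class='price']/text()").get(),
            }
        # Follow pagination, if a "next" link exists.
        next_page = response.xpath("//a[@rel='next']/@href").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```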
**Primary Responsibilities**
- Design and develop efficient web crawler systems to meet business data scraping requirements.
- Conduct scraping strategy analysis on target websites and formulate optimal scraping plans.
- Maintain and optimize existing crawler systems to improve data scraping speed and accuracy.
- Clean and process scraped data to ensure data quality and availability.
- Keep track of and research the latest crawler technologies and anti-crawling mechanisms to continuously enhance the performance of crawler systems.
**Requirements**
- 5 years of experience in similar positions.
- Bachelor’s degree or above in Computer Science or a related field, with a solid foundation in computer science.
- Proficient in Python programming, familiar with commonly used crawler frameworks (such as Scrapy, PySpider, etc.) and information extraction techniques (such as regular expressions).
- Familiar with HTTP/HTTPS protocols, Cookie mechanisms, and web scraping principles.
- Proficient in JavaScript, XPath.
- Proficient in databases such as MySQL and BigQuery.
- Have knowledge of common anti-crawling techniques and countermeasures, able to tackle various anti-crawling challenges.
- Experience in designing and developing distributed systems, familiar with multithreading, asynchronous programming, and other technologies.
- Have good problem-solving skills and a strong teamwork spirit, able to work under pressure.
- Preference will be given to candidates with experience in scraping data from large platforms and handling massive datasets.