Data Engineer

India | Technology | Full-time

About the Tech Team

The engineering team at Drip Capital is responsible for building and maintaining the online global trade financing platform that supports interactions among buyers, sellers, financing partners, insurance agents, global retail partners, trade agents, shipping and transportation companies, and supply chain and warehousing companies worldwide.

Our primary goal is to provide customers with time-critical capital while balancing requirements around risk, fraud management, and compliance. Our services are accessed by customers worldwide, so our engineering systems need to be policy-driven, easily reconfigurable, and able to handle multiple regional languages. We use machine learning for risk classification and prediction, intelligent document parsing subsystems, robotic process automation, REST APIs to connect our microservices, and a cloud-based data lake and warehouse for data storage and analysis.

Our team comprises talent from top-tier institutions including Wharton, Stanford, and the IITs, with years of experience at companies like Google, Amazon, Standard Chartered, BlackRock, and Yahoo. We are backed by leading Silicon Valley investors: Sequoia, Wing, Accel, and Y Combinator. We are a global company headquartered in Silicon Valley, with offices in India and Mexico.

Your Role

  • Partner with stakeholders such as analysts, data scientists, product managers, and leadership to understand their data needs.
  • Design, build, and launch complex data pipelines that move data from multiple sources such as MySQL, MongoDB, and S3 (see the sketch after this list).
  • Design and maintain data warehouse and data lake solutions.
  • Build data expertise and own data quality for your areas.
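
To give a flavor of the pipeline work above, here is a minimal sketch of one extract-and-stage step, assuming pymysql and boto3 as the client libraries; the host, credentials, table, and bucket names are hypothetical placeholders, not a description of our actual infrastructure:

```python
# Minimal extract-and-stage sketch: pull rows from a MySQL table and
# stage them in S3 as CSV for downstream warehouse loads. Hosts, table
# names, and bucket names are hypothetical placeholders.
import csv
import io

import boto3     # AWS SDK for Python
import pymysql   # pure-Python MySQL client

def stage_table_to_s3(table: str, bucket: str, key: str) -> None:
    conn = pymysql.connect(host="mysql.internal", user="etl",
                           password="...", database="trade")
    try:
        with conn.cursor() as cur:
            # NOTE: the table name is interpolated for brevity; real
            # pipeline code should validate it against an allowlist.
            cur.execute(f"SELECT * FROM {table}")
            buf = io.StringIO()
            writer = csv.writer(buf)
            writer.writerow(col[0] for col in cur.description)  # header row
            writer.writerows(cur.fetchall())
    finally:
        conn.close()

    # Upload the CSV snapshot; a warehouse COPY job can ingest it later.
    boto3.client("s3").put_object(Bucket=bucket, Key=key,
                                  Body=buf.getvalue().encode("utf-8"))

stage_table_to_s3("invoices", "drip-data-lake", "staging/invoices.csv")
```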

Our Checklist

  • 2+ years of experience with a scripting language such as Python
  • Strong working knowledge of SQL
  • Strong problem-solving and communication skills
  • Process-oriented with great documentation skills
  • Collaborative team spirit

Good to have

  • Previous experience setting up custom ETL pipelines
  • Knowledge of NoSQL databases
  • Experience with data warehousing tools such as Amazon Redshift or Google BigQuery (see the sketch after this list).
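
As context for the warehousing experience above, here is a minimal sketch, assuming psycopg2 as the client, of loading the staged CSV from the earlier example into Amazon Redshift with a COPY command; the cluster endpoint, credentials, and IAM role ARN are hypothetical placeholders:

```python
# Minimal sketch: load the staged CSV into Amazon Redshift with COPY.
# Endpoint, credentials, and the IAM role ARN are hypothetical placeholders.
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

conn = psycopg2.connect(host="drip-dw.example.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="etl", password="...")
with conn, conn.cursor() as cur:  # commits the transaction on success
    cur.execute("""
        COPY invoices
        FROM 's3://drip-data-lake/staging/invoices.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        CSV
        IGNOREHEADER 1;
    """)
conn.close()
```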

If you love building scalable, high-performance, reliable distributed systems and want to work with people who feel the same way you do, let's talk!