Senior Data Engineer

  • Social Finance, LLC
  • San Francisco, California
  • 01/15/2026
Tags: Information Technology, Telecommunications, SQL, Python, Data Scientist, Testing

Job Description

Job Duties:

  • Design and build a data warehouse using a cloud-based data platform (Snowflake) for machine learning, data analysis, self-serve analytics, and reporting needs.
  • Develop Extract, Transform, Load (ETL) pipelines to orchestrate script execution, automate data transformation, and load data into data warehouses.
  • Perform data cleansing, validation, testing, and schema design to ensure the accuracy and reliability of data insights.
  • Build and manage automated workflows, and monitor these pipelines and DAGs (Directed Acyclic Graphs).
  • Identify and implement improvements by automating manual processes to optimize data delivery, and re-design infrastructure for greater scalability using Python and SQL.
  • Collaborate with cross-functional teams, including data engineers, analysts, and data scientists, to understand business logic and needs in order to create effective and efficient pipelines and data models.
  • Create ad-hoc reports and data visualizations based on user requirements.
  • Participate in ETL flow design reviews and recommend solutions to improve processes.
  • Perform data quality checks and alerting to identify issues and resolve bugs in a timely manner.
  • Participate in source code and design reviews in a software development lifecycle (SDLC) driven environment, applying technical, functional, and domain knowledge.
  • Identify issues or gaps and suggest improvements to team processes.
  • Provide support for customer requests for all products handled by the team.

Full-time telecommuting is an option.

Minimum Requirements: Master's degree (or its foreign degree equivalent) in Analytics, Data Science, Engineering (any field), or a related quantitative discipline, and six (6) months of experience in the job offered or in any related occupation.

Special Skill Requirements:

  1. Data warehouse design and dimensional modeling
  2. Extract, Transform, Load (ETL)
  3. SQL
  4. Python
  5. Data Visualization (Tableau or Power BI)
  6. GitLab
  7. Azure
  8. PySpark
  9. OLAP and OLTP system experience (MySQL Database, Postgres DB, or Snowflake)
  10. Machine Learning
  11. Data Pipeline - Performance & Scalability Techniques

Any suitable combination of education, training, and/or experience is acceptable.

Salary: $177,813.00 - $195,594.00 per annum.

Submit resume with references to: Req.# 53.2 at: ATTN: HR,