BlockFi is looking for a Senior Data Architect to join our growing team!
About the Team
The Data Platforms organization is responsible for the end-to-end data needs of BlockFi products and services and is composed of the following teams: Data Engineering, Data Strategy, Master Data Management, and Machine Learning Engineering. The Senior Data Architect will be part of the Enterprise Data Engineering organization, serving all of BlockFi's data needs. The Data Engineering Team designs, builds, and supports the data platforms, products, pipelines, and governance frameworks that power analytics, business insights, data science, and machine learning. We aim to make data a competitive advantage for BlockFi by empowering our business partners with industry-leading insights and tools so that they can make fast, bold decisions with trusted data to create unsurpassed client experiences and grow our market share. We enable automation at scale that reduces risk, improves speed, and eliminates manual processes.
Your Mission
As a Senior Data Architect, you will be responsible for setting and executing the enterprise data architecture and for helping build and support the enterprise data platform needed to enable data analysts, data scientists, and other coworkers across BlockFi to make data-driven decisions. You will design and implement technical solutions and mentor junior engineers. We are looking for proactive, collaborative, and adaptive engineers with real-world experience building distributed systems at scale.
- Architect/Design:
- Work with diverse stakeholders to ensure our data platforms are architected for availability, reliability, resilience, scalability, performance, and security from the ground up.
- Write design and architectural proposals and review proposals from other data engineers. Ensure tradeoffs are clearly and publicly documented, and that designs are aligned with business goals.
- Deliver and Own Solutions:
- Create and execute the end-to-end data architecture for the enterprise data platform.
- Evangelize best practices in data architecture, ETL architecture, streaming frameworks and other data pipeline architectures.
- Uphold quality standards through cross-team communication, mentoring, code review, and backlog grooming.
- Own system availability and monitor system health; ensure alerts, metrics, and runbooks are in place; and debug issues in production.
- Adapt:
- Quickly learn new tools and technologies, develop an understanding of existing systems, and identify and tackle high impact work.
- Proactively seek to learn about the company, products, processes, and culture. Align technical & architectural decisions with business goals.
Your Expertise
- Technical Breadth and Depth in Several Areas: 5+ years of experience as a data architect, including extensive experience architecting data warehouses, data lakes, and data platforms for consumption by analytics and data science/ML, and architecting and building data pipelines (batch ETL, micro-batches, and real-time streaming).
- Technical Ownership: Experience owning data platforms end-to-end (from data generation/ingestion to curated/aggregated layers): designing, estimating, implementing, testing, maintaining, debugging, and supporting high-quality software in production. Experience building foundational, curated, and aggregated data layers that enable self-service business intelligence (easily consumable by non-technical users).
- Communication: Excellent communication, presentation and interpersonal skills.
- Collaboration: Empathetic, and willing to do the legwork required to build consensus. Always seeks out feedback on technical designs, solutions, and code.
- Initiative and Focus on Outcomes: Works independently and takes initiative while maintaining transparency and collaboration. Can deliver high-quality solutions without assistance. Proactively identifies problems and comes to conversations with possible solutions.
- Adaptive: Ability and motivation to quickly learn new languages, technologies and tools. Pragmatic bias toward outcomes, and technical decisions that solve real business problems.
- Successful candidates will have:
- Strong knowledge of data architecture on cloud data platforms (preferably the AWS ecosystem, Databricks, and Snowflake)
- Strong skills in Python, ETL transformations, data modeling, & feature engineering
- Experience with SQL adapters such as Ecto, and with managing SQL schema changes in code
- Experience with AWS cloud services: S3, EC2, RDS, Aurora, Redshift (or other cloud services)
- Experience with real-time stream-processing systems: Kinesis, EventBridge, Confluent, Kafka or similar.
- Advanced SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimizing data pipelines, architectures and data sets.
- Strong business acumen, critical thinking and technical abilities along with problem solving skills.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Your Perks
We benefit from the great work our employees do each day. That is why we are committed to providing a variety of awesome benefits to help them live their best lives.
- Competitive salary because we value your experience and expertise
- Unlimited vacation / sick days because everyone deserves time for R&R
- Flexible work environment because we are a geographically dispersed team and we believe in balance
- A close-knit team of enthusiastic, collegial, and driven people to work alongside, because teamwork makes the dream work