Meta Data Engineer, Privacy in Helena, Montana
Privacy is one of the defining social issues of our time, and we have a responsibility to the people and businesses across the world who trust us with their data. Our organization is responsible for providing analytics and data insights for the design, implementation, monitoring, and maintenance of the company's Privacy Programs. We work with our partners to ensure people's privacy is at the center of our products and services, and that we're complying with our regulatory obligations, all while maintaining Facebook's core culture. We are looking for candidates who share our passion for tackling privacy complexities head-on and who can help design, build, scale, and continuously improve industry-leading privacy programs at Meta.
As a Privacy Data Engineer, you will build analytics infrastructure and data pipelines that bring precision, operational scale, and risk mitigation to the privacy programs. This is a partnership-heavy role: given the consulting nature of our team, you will contribute to a variety of projects and technologies depending on our partners' needs. You will be responsible for scalable, reliable, high-quality data products in a rapidly growing and constantly evolving space, continually innovating and solving problems with data. In doing so, you will help our partners, the privacy programs, deliver the processes, tools, products, infrastructure, and decisions that allow us to honor people's privacy in everything we do.
The ideal candidate will have a passion for adding social value and creating impact from the ground up in a fast-paced, highly collaborative, team-oriented environment, along with a proven track record of thought leadership and impact in developing similar analytics and metrics-based programs. This position is part of the Privacy Programs team.
Data Engineer, Privacy Responsibilities:
Partner with leadership, software engineers, product managers, program managers and data scientists to understand data needs.
Act as a subject matter expert in a specific domain or class of data engineering challenges, leading by example and mentoring others.
Influence short- and long-term strategy with cross-functional teams to drive impact.
Design, build, and launch extremely efficient and reliable data pipelines to move data across a number of platforms, including Data Warehouse, online caches, and real-time systems.
Build data expertise and own data quality for allocated areas of ownership.
Architect, build, and launch new data models that provide intuitive analytics.
Work with teams to establish data sources that serve teams' business needs, identifying and advocating for process improvements and industry best practices.
Work with a variety of Meta products, applications and warehouses, transforming raw data into finished products to help drive, investigate, monitor, report and quantify the state of risk and compliance.
Build strategic relationships with global and cross-functional teams to understand and anticipate risks to the company and deliver analytics for monitoring compliance and responding to regulations.
Communicate, at scale, through multiple mediums: Presentations, dashboards, company-wide datasets, bots and more.
Educate your partners: Use your data and analytics experience to ‘see what’s missing’, identifying and addressing gaps in their existing logging and processes.
Leverage data and business principles to solve large scale web, mobile and data infrastructure problems.
Generate information and insights from data sets, identifying trends and patterns to enhance process maturity and the prioritization of related business decisions.
Minimum Qualifications:
Experience understanding requirements, analyzing data, discovering opportunities, addressing gaps, and communicating findings to multiple stakeholders.
5+ years of Python development experience.
5+ years of experience with schema design and data modeling.
5+ years of SQL experience.
5+ years of experience engineering data pipelines on large-scale datasets using big data technologies (e.g., Hive, Presto, Spark, Flink).
5+ years of experience with workflow management engines (e.g., Airflow, Luigi, Prefect, Dagster, Digdag, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M).
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
Preferred Qualifications:
Experience working with cloud or on-prem Big Data/MPP analytics platforms (e.g., Snowflake, AWS Redshift, Google BigQuery, Azure Data Warehouse, Netezza, Teradata, or similar).
Experience with designing and implementing real-time pipelines.
Experience with more than one coding language.
Experience querying massive datasets using Spark, Presto, Hive, Impala, etc.
Experience with data quality and validation.
Experience with notebook-based data science workflows.
Experience with SQL performance tuning and E2E process optimization.
Experience with Airflow.
Experience with anomaly/outlier detection.
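To give a flavor of the data-quality and anomaly-detection work referenced above, here is a minimal, purely illustrative Python sketch of a z-score outlier check one might run over a pipeline metric such as daily row counts. The function name and data are hypothetical and do not represent Meta tooling.

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical; nothing to flag
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical daily row counts for a pipeline partition; the last day
# dropped sharply, suggesting an upstream data-quality issue.
daily_rows = [1000, 1020, 980, 1010, 990, 1005, 120]
print(zscore_outliers(daily_rows, threshold=2.0))  # flags the 120-row day
```

In practice, checks like this would be scheduled alongside the pipeline itself (e.g., as a validation task in a workflow engine) so regressions are caught before downstream consumers see bad data.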
Equal Opportunity: Facebook is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Facebook is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at firstname.lastname@example.org.