JOB LOCATION: 208 S. Akard Street, Dallas, TX 75202 [and various unanticipated locations throughout the U.S.; may work from home]
DUTIES: Interpret the requirements of various big data analytic use cases and scenarios, and drive the design and implementation of specific data models to help drive better business decisions through insights from a combination of external data and AT&T's data assets. Develop the necessary enablers and data platform in the big data lake environment, and maintain their integrity throughout the life-cycle phases. Define data requirements, gather and mine large-scale structured and unstructured data, and validate data by running various data tools in the big data environment. Support standardized, customized, and ad-hoc data analysis, and develop mechanisms to ingest, analyze, validate, normalize, and clean data. Implement statistical data quality procedures on new data sources, and apply rigorous, iterative data analytics. Support data scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value. Work with big data policy and security teams and with legal to create data policy, and develop interfaces and retention models that require synthesizing or anonymizing data. Develop and maintain data engineering best practices, and contribute insights on data analytics and visualization concepts, methods, and techniques. Provide technical design to ingest data into the Palantir Foundry platform on Azure. Participate in creating ingestion strategy and technology patterns. Provide technical direction (architecture and design) for projects ingesting data into Palantir Foundry. Conduct design, architecture, and code reviews. Engage with the vendor to meet AT&T requirements and deliverables. Create tasks for data replication and data synchronization. Utilize Oracle, Teradata, Vertica, Azure Data Lake, Databricks, and Snowflake. Utilize HBase and the HBase shell. Develop UNIX shell scripts. Develop database load scripts (VSQL for Vertica). Utilize BTEQ, MultiLoad, FastLoad, and FastExport scripts for Teradata. Utilize SnowSQL for Snowflake and PySpark for Databricks. Develop schedules using workload scheduling tools (TWS).
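For a sense of the hands-on work, here is a minimal PySpark sketch of the ingest-validate-normalize pattern described above; every path, table, and column name is an illustrative assumption, not an actual AT&T data asset:

```python
# Minimal PySpark sketch: ingest a raw feed from a lake path, validate it,
# normalize/clean it, and publish a curated table. All paths and column
# names are hypothetical, chosen only to illustrate the pattern.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Ingest: read a raw, semi-structured feed from the data lake.
raw = spark.read.option("header", True).csv("/data/raw/example_feed/")

# Validate: require key fields and a parseable event timestamp.
validated = (
    raw.dropna(subset=["record_id", "event_ts"])
       .filter(F.col("event_ts").cast("timestamp").isNotNull())
)

# Normalize and clean: standardize types and casing, drop duplicates.
clean = (
    validated
    .withColumn("event_ts", F.col("event_ts").cast("timestamp"))
    .withColumn("source", F.trim(F.lower(F.col("source"))))
    .dropDuplicates(["record_id"])
)

# Simple data quality check before handing the table to analysts.
dropped = raw.count() - clean.count()
print(f"Rows dropped during validation/cleaning: {dropped}")

# Publish the curated output for downstream data science work.
clean.write.mode("overwrite").parquet("/data/curated/example_feed/")
```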
REQUIREMENTS: Requires a Master's degree, or a foreign equivalent degree, in Electrical and Electronic Engineering, Computer Science, or Computer Engineering, and three (3) years of experience in the job offered or three (3) years of experience in a related occupation creating tasks for data replication and data synchronization; utilizing Oracle, Teradata, Vertica, Azure Data Lake, Databricks, Snowflake, and Palantir Foundry; utilizing HBase and the HBase shell; developing UNIX shell scripts; developing database load scripts (VSQL for Vertica); utilizing BTEQ, MultiLoad, FastLoad, and FastExport scripts for Teradata; utilizing SnowSQL for Snowflake and PySpark for Databricks; and developing schedules using workload scheduling tools (TWS).
Our Principal Big Data Engineers earn between $158,200 and $254,300 yearly, not to mention all the other amazing rewards that working at AT&T offers.
Joining our team comes with amazing perks and benefits:
• Medical/Dental/Vision coverage
• 401(k) plan
• Tuition reimbursement program
• Paid Time Off and Holidays (based on date of hire, at least 23 days of vacation each year and 9 company-designated holidays)
• Paid Parental Leave
• Paid Caregiver Leave
• Additional sick leave beyond what state and local law require may be available but is unprotected
• Adoption Reimbursement
• Disability Benefits (short term and long term)
• Life and Accidental Death Insurance
• Supplemental benefit programs: critical illness, accident hospital indemnity, and group legal
• Employee Assistance Programs (EAP)
• Extensive employee wellness programs
• Employee discounts of up to 50% off eligible AT&T mobility plans and accessories, AT&T internet (and fiber where available), and AT&T phone
AT&T is an Affirmative Action/Equal Opportunity Employer, and we are committed to hiring a diverse and talented workforce. EOE/AA/M/F/D/V