Adobe Experience Platform Architect

Job Category: Tech
Location: Maryland, United States
Employment Type: Contract
Location Terms: On-site

Job Responsibilities:
Apply 5+ years of hands-on data engineering experience to architect and design robust data solutions.
Implement Adobe Experience Platform integration patterns to connect the platform seamlessly with existing infrastructure.
Develop frameworks and reference implementations to establish best practices and streamline development processes.
Apply strong SQL skills to database management and optimization, ensuring efficient data retrieval and manipulation.
Write Python and PySpark code for server-side and data-processing tasks, contributing to the development of scalable solutions.
Leverage modern data stack technologies such as Spark and Snowflake on cloud platforms like AWS (minimum 2 years of experience).
Design and implement ETL/ELT pipelines for complex data engineering projects, using tools such as Airflow, dbt, and Great Expectations (a minimal PySpark sketch of such a pipeline step follows this list).
Apply expertise in database modeling and normalization techniques to ensure data integrity and scalability.
Utilize DevOps tools such as Git, Jenkins, and GitLab CI for version control, automated testing, and continuous integration/deployment processes.
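
For illustration only, a minimal sketch of the kind of PySpark ETL step this role covers. The dataset, S3 paths, and column names (events, event_id, event_ts, user_id) are hypothetical placeholders, not details of any actual client pipeline:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events_daily_rollup").getOrCreate()

    # Extract: read raw event data (the path is a placeholder).
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Transform: deduplicate, then aggregate to one row per user per day.
    daily = (
        events.dropDuplicates(["event_id"])
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("user_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    # Load: write the rollup back out, partitioned by date.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/events_daily/"
    )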

Skills that would be a plus:
Familiarity with ETL tools such as Informatica, SnapLogic, or dbt for streamlined data integration.
Experience with cloud data warehousing platforms, particularly Snowflake or similar products, to optimize data storage and retrieval.
Exposure to workflow management tools such as Airflow for orchestrating complex data workflows efficiently (see the DAG sketch after this list).
Familiarity with messaging platforms like Kafka for real-time data streaming and processing.
Exposure to NewSQL platforms such as CockroachDB, or to PostgreSQL, for scalable and distributed database solutions.
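
Likewise for illustration, a minimal Airflow DAG sketch showing how a daily pipeline like the one above might be orchestrated. It assumes Airflow 2.4+; the DAG id, task ids, and callables are placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load():
        # Placeholder for the real work, e.g. submitting the PySpark job
        # above or running a Snowflake COPY command.
        print("extract and load step")

    def validate():
        # Placeholder for a data-quality gate, e.g. a Great Expectations
        # checkpoint run.
        print("validation step")

    with DAG(
        dag_id="daily_events_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
        check = PythonOperator(task_id="validate", python_callable=validate)
        load >> check  # run the quality check only after the load succeeds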

Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience in data engineering roles, with a minimum of 5 years' experience.
Strong proficiency in SQL, Python, and PySpark.
Experience with modern data stack technologies (Spark, Snowflake) on cloud platforms (AWS).
Proficiency in ETL/ELT pipeline development and management.
Familiarity with database modeling, normalization techniques, and database optimization.
Experience with DevOps tools (Git, Jenkins, GitLab CI).
Excellent problem-solving skills and ability to work in a fast-paced environment.
 