Corporate Vice President, Sr Data Engineer
Location: Lebanon, New Jersey
Posted on: June 23, 2025
Job Description:
Our New York Life culture has laid the foundation for over 175
years of commitment to our employees, agents, policy owners, and
the communities where we live and work. Here you become a valued
part of a welcoming, inclusive, and caring organization with a
long-standing legacy in stability and growth. That strength revolves
around our diversified, multi-dimensional business portfolio that
goes beyond life insurance. As a Fortune 100 company and industry
leader, we provide an environment where you can explore your career
ambitions, offering opportunities to tackle meaningful challenges
and stretch your skills while balancing work and life priorities.
You will be part of an inclusive team guided by our belief to
always be there for each other, providing the support and
flexibility to grow and reach new heights while making an impact in
the lives of others. You are our future, and we commit to investing
in you accordingly. Visit our LinkedIn to see how our employees and
agents are leading the industry and impacting communities. Visit
our Newsroom to learn more about how our company is constantly
evolving to meet our clients' and employees' needs.

As part of AI&D, you'll have the opportunity to contribute to groundbreaking initiatives that shape New York Life's digital landscape. Leverage cutting-edge technologies like Generative AI to increase productivity, streamline processes, and create seamless experiences for clients, agents, and employees. Your expertise fuels innovation, agility, and growth, driving the company's success.

In this role you will build, expand, and optimize data pipeline architecture using the Amazon Web Services (AWS) cloud, Python, and PySpark. You will utilize tools such as Amazon Relational Database Service (Amazon RDS), AWS Database Migration Service (AWS DMS), AWS Glue, Amazon Simple Storage Service (Amazon S3), and AWS Lambda triggers and functions. You will apply data engineering strategies; Data Lake, Lakehouse, and Data Warehouse technologies; and streaming/real-time data, and use SQL for data analysis. Proficiency in other cloud technologies such as Databricks is also expected.

What you'll do:
• Design and develop the enterprise infrastructure and platforms required for data engineering.
• Use AWS Cloud technologies to support data needs for the expansion of Machine Learning/Data Science capabilities, applications/mobile apps/systems, BI/analytics, and cross-functional teams.
• Create methods and routines to transition data from on-premises systems to the AWS Cloud.
• Store data in the AWS Cloud platform and develop transformation logic based on business rules.
• Create and maintain optimal data pipeline architecture; test and implement data environments.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Build the infrastructure required for optimal ETL/ELT (extraction, transformation, and loading) of data from a wide variety of data sources using SQL and AWS big data technologies.
• Utilize AWS Cloud tools such as Amazon Relational Database Service (Amazon RDS), AWS Database Migration Service (AWS DMS), AWS Glue, Amazon Simple Storage Service (Amazon S3), and AWS Lambda triggers and functions.
• Apply in-depth knowledge of Databricks.
• Develop high-level and low-level data architecture.
• Develop and debug code in PySpark/Spark/Python; use SQL and/or PostgreSQL.
• Use data pipeline, workflow management, and orchestration tools.
• Perform data analysis with SQL.
• Collaborate with Cloud Data Architects, Data Engineers, DBAs, and Analytics & Software Engineers.
• Interact with stakeholders, Product Management, and Design teams to understand requirements and resolve issues.

What you'll bring:
• 8 years of experience as a Cloud Data Engineer, Data Engineer, Data Architect, AWS Engineer, or similar.
• 3 years of experience with Amazon Relational Database Service (Amazon RDS), AWS Database Migration Service (AWS DMS), Amazon Simple Storage Service (Amazon S3) data lakes, AWS Lambda, etc. in shared-service, hybrid environments.
• Experience with highly scalable data stores, Data Lakes, Data Warehouses, Lakehouses, and unstructured datasets.
• Strong expertise in Databricks (Spark on Databricks) and Delta Lake architecture.
• Proficiency in Python, Scala, or SQL for building data pipelines and processing data.
• Experience with data integration, data processing, data streaming, message queuing, and/or ETL/ELT.
• Deep knowledge of cloud platforms (Azure, AWS, or GCP) and experience managing cloud-based data storage and compute resources.
• Experience with Git.
• Ability to lead a team of developers.
• Bachelor's degree in Data Science, Computer Engineering, or a related field preferred.
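For a sense of the event-driven pipeline work described above (S3 Data Lake feeding Lambda triggers), here is an illustrative sketch of a Lambda handler that reacts to S3 object-creation events. All bucket names, key prefixes, and the downstream hand-off are hypothetical placeholders, not part of the role description:

```python
# Illustrative sketch: an AWS Lambda function triggered by S3 object
# creation, the event-driven pattern common in AWS data pipelines.
# Bucket names and keys here are hypothetical examples.
import json
import urllib.parse


def lambda_handler(event, context):
    """Extract (bucket, key) pairs from an S3 event notification payload."""
    records = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        # Object keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(s3.get("object", {}).get("key", ""))
        if bucket and key:
            records.append({"bucket": bucket, "key": key})
    # In a real pipeline, this is where a Glue job, Step Functions
    # state machine, or other downstream process would be started.
    return {"statusCode": 200, "body": json.dumps(records)}
```

In practice such a handler typically does no heavy transformation itself; it only validates the event and kicks off the actual ETL/ELT work elsewhere.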