
Data Engineer (Microservice, Event and Metadata)

Req #: 170121279
Location: Wilmington, DE, US
Job Category: Technology
Job Description:

JP Morgan Chase is undertaking an aggressive digital transformation agenda within the Consumer and Community Bank (CCB), which serves over 50 million customers and builds on the success of our market-leading mobile and online service offerings. JPMC is investing in innovative ways to deepen customer engagement and create the most compelling digital experience in the financial services industry. We are looking for talent that will help us position JPMC as the undisputed leader in digital financial services and payments, enabling JPMC to deliver highly personalized, real-time experiences that wow our customers.

CCB is advancing toward a transformation in which high-velocity software engineering of business capabilities is paramount. This strategy is driven by our customers' increasing use of digital platforms as they bank with us, and by the evolution of financial products and services and their potential integration into digital banking ecosystems. The growth of distributed data, event-driven architecture and imperatives for relevant, actionable analytics will elevate the need for metadata that is engineered into our software delivery, including development frameworks and the continuous delivery tool chain, to support visibility and intelligence engineering.


In this role, the Data Engineer will play a hands-on lead role in establishing a modern, automated metadata strategy, and will influence the implementation of those strategies and standards in the capabilities, code and tool chain, so that data at rest and in transit is understood, self-describing, locatable and usable from the start. This position will expand well beyond metadata.


Position Summary:

  • The Data Engineer will be responsible for assessing and defining a holistic yet pragmatic metadata strategy for the new banking architecture, with focus on micro-services, events, APIs and the big data reservoir/lake, to ensure all data is self-describing, locatable, traceable and right-fit to answer consumption questions (by humans and machines).  Key areas include:
    • Document a forward-looking metadata strategy position paper for the New Banking Architecture, including open source tools/automation, standards, taxonomies and reference data being used across CCB – from Channel/Operational to Analytic Data.
    • Prioritize and document MVP metadata for key assets of the New Banking Architecture, including APIs, events, micro-services (private data), information flows, lineage/provenance and the data catalog across strategic operational and big data assets (lifecycle management compliance; an organized and ordered catalog)
    • Recommend use and working product models for registries and catalogs to support micro-services, event-driven architecture and big data, including aggregation and visualization.
    • Drive cultural change so that metadata capture and use happen “to scale”
    • Document a clear position on integration with the CCB Metadata Repository and Technology Data Management tooling, as required
  • Research, select and champion tools and open source software to meet registry and catalog (index) metadata needs for data stores and modern, highly distributed data ecosystems, including data lakes with both batch and real-time integration architectures.
    • Ensure metadata is designed-in, automated and traceable from the start
    • Ensure metadata can be located and is usable – by humans and machines
    • Ensure metadata can be leveraged to support data quality
  • Establish data flow metadata standards and drive adoption of those standards across CCB so there is a Live Data Map of understood dependencies.
  • Work directly with the Chief Development Office, to integrate metadata capture at design and run-time (CI/CD) to automate evergreen metadata. 
  • Define the pattern for contribution and enrichment of metadata, especially in the big data space which will rely on structured and crowd sourced models.
  • Gather, research and iterate on principles, tags and semantics to support consistency in describing ‘data and event assets’, continuously improving operational and analytical consumption and insights
  • Collaborate on Event Streaming as a Service, with focus on standards and automation around event structure and schema metadata, event naming, the registry, topic-to-event ‘grain’, guidance and governance
  • Collaborate on event change data capture and reconciliation patterns for Core Account Processing platforms
  • Establish best practices for data integration and consumption APIs (Data as a Service) for on premise and off premise analytics
  • Roll up sleeves and develop as needed, embracing a continuous-improvement/machine-learning mindset of metadata as a service
  • Aggressively identify opportunities to leverage AWS and drive those opportunities
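The event-metadata responsibilities above (self-describing events, metadata that is designed-in and traceable from the start, and validation before data reaches a topic) can be illustrated with a minimal sketch. All names here — the envelope shape, the `REQUIRED_METADATA` fields, the service name — are illustrative assumptions for this posting, not JPMC or Confluent standards:

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical metadata fields a registry/catalog could require on every event
# so data in transit is self-describing, locatable and traceable.
REQUIRED_METADATA = {"event_type", "schema_version", "source_service",
                     "trace_id", "emitted_at"}

def make_event(event_type, schema_version, source_service, payload):
    """Wrap a payload in a self-describing envelope (illustrative shape)."""
    return {
        "metadata": {
            "event_type": event_type,
            "schema_version": schema_version,
            "source_service": source_service,
            "trace_id": str(uuid.uuid4()),  # lineage/provenance hook
            "emitted_at": datetime.now(timezone.utc).isoformat(),
        },
        "payload": payload,
    }

def validate_event(event):
    """Reject events with incomplete metadata before they are published."""
    missing = REQUIRED_METADATA - set(event.get("metadata", {}))
    if missing:
        raise ValueError(f"event missing metadata fields: {sorted(missing)}")
    return True

event = make_event("AccountOpened", "1.0.0", "core-account-service",
                   {"accountId": "A-123"})
validate_event(event)
print(json.dumps(event["metadata"], indent=2))
```

In a production event-streaming platform this check would typically live in a schema registry (e.g. enforced via Avro schemas at serialization time) rather than in application code, so the metadata contract is automated rather than relying on developer discipline.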

Qualifications:

  • Over 12 years of experience in data management and engineering
  • Expertise with metadata management and data lifecycle management
  • Expertise in engineering with distributed database technologies like Hadoop and Cassandra
  • Expertise with event driven architecture including event schemas and standards around technologies like Confluent/Kafka
  • Expertise with micro-services design and development, including API and cloud based platforms and technologies and containers
  • Strong communication and information-sharing skills to ready the organization for distributed database event architectures and metadata automation (metadata-driven)
  • Ability to communicate with senior leaders in the organization.
  • Hands-on experience with Java, JSON/XML, Avro, Cassandra, Kafka, Spark, Akka, Hive ORC, Elasticsearch, ETL tools, RDF/taxonomies, Hadoop, BI/visualization, lambda architectures, and cloud-native platforms and technologies.
  • Experience implementing data pipelines in big data technologies such as Hadoop, NiFi, Spark, Kafka, AWS EMR, etc.
  • Passion for hands-on work and figuring out how to continuously improve in the data engineering space, including leverage of cloud providers/AWS