
Digital Intelligence - Data Engineer

Req #: 170097220
Location: New York, NY, US
Job Category: Digital
Job Description:
JPMorgan Chase & Co. (NYSE: JPM) is a leading global financial services firm with assets of $2.6 trillion and operations worldwide. The firm is a leader in investment banking, financial services for consumers and small business, commercial banking, financial transaction processing, and asset management. We serve millions of consumers in the United States and many of the world's most prominent corporate, institutional and government clients under our J.P. Morgan and Chase brands. Information about JPMorgan Chase & Co. is available at
Chase Consumer & Community Banking serves nearly 66 million consumers and 4 million small businesses with a broad range of financial services, including personal banking, investment advice, small business lending, mortgages, credit cards, payments and auto financing. The Digital team is responsible for building innovative platforms and developing new products that make banking and payment tasks simpler and more personalized for our customers, as well as deepening customer engagement and loyalty with more relevant offers and services. We function like a fintech start-up in our brand new offices that inspire collaboration, transparency, agile development, and a fun working environment.
The Digital Intelligence team’s mission is to deeply personalize the user experience of our millions of customers through the use of the firm’s massive data, machine learning and proprietary data platforms. Whether it’s building a financial graph of consumers and small businesses, optimizing ad targeting on paid media sites, recommending the most relevant hotels, or detecting fraudulent behavior, we work at the intersection of statistics, machine learning and engineering to tackle some of the most challenging and interesting problems you will find in digital banking, commerce and payments. Many companies claim that they work on “big data” and “data science”. We live and breathe them every day.
The ideal candidate has created production-level, low-latency, highly scalable big data pipelines to process and analyze terabytes of data. They are hands-on, have a solid understanding of software engineering principles, and love learning new skills along the way. They feel comfortable working with a diverse team of data scientists, product managers and business partners.



Qualifications:

  • MS+ in Computer Science, Engineering, or a quantitative discipline.
  • Proven knowledge of the design and implementation of big data architecture, as demonstrated by either industry experience or coursework/academic research.
  • 2+ years of experience with the Apache Hadoop ecosystem, including Spark, MapReduce, Hive, Kafka, Solr, Elasticsearch, HBase, Cassandra, and Flink. Experience with real-time systems is a bonus.
  • Must be able to write clean and concise code in at least two of the following: Python, Java, and Scala.
  • Experience building visualization dashboards with tools such as R Shiny, Plotly, or Bokeh. Experience with tools like Apache NiFi, Apache Beam, and Airflow is a bonus.
  • You are curious, have a research mindset, and enjoy working on open-ended problems.



Join our Talent Community

Not ready to apply? Leave your information with us and we will keep you up to date with new career opportunities.

Other Information

Apply Using LinkedIn

You can also apply using your LinkedIn® profile. It may save you some time because your information will be automatically transferred into our system. Just click on the LinkedIn logo when you get to the application screen and follow the directions.

Submit an Updated Résumé

During the application process, be sure you have an up-to-date copy of your résumé, your cover letter, and any other documentation you would like to submit.