
Big Data / ETL Software Engineer

Req #: 170118931
Location: Plano, TX, US
Job Category: Technology
Job Description:

JPMorgan Chase & Co. (NYSE: JPM) is a leading global financial services firm with assets of $2.6 trillion and operations worldwide. The firm is a leader in investment banking, financial services for consumers and small business, commercial banking, financial transaction processing, and asset management. A component of the Dow Jones Industrial Average, JPMorgan Chase & Co. serves millions of consumers in the United States and many of the world’s most prominent corporate, institutional and government clients under its J.P. Morgan and Chase brands. Information about JPMorgan Chase & Co. is available at http://www.jpmorganchase.com/.

Commercial Banking (CB) serves more than 30,000 clients, including corporations, municipalities, financial institutions, and not-for-profit entities with annual revenues generally ranging from $20 million to $2 billion. Our Commercial Bankers serve these clients by operating in 14 of the top 15 major U.S. markets. Our professionals' industry knowledge and experience, combined with our dedicated service model, comprehensive solutions, and local expertise, make us the #1 commercial bank in our retail branch footprint.

 

Commercial Banking IT is looking for a Big Data Lead/Architect/Developer with skills and experience in large-scale Hadoop-based data platforms who will be responsible for the design, development, and testing of a next-generation enterprise data hub and its reporting and analytic applications. This individual will work with an existing development team to build the new Hadoop-based platform, migrate the existing data platforms onto it, and provide production support. The current platform uses many tools, including Oracle SQL, SQL Server, SSIS, and SSRS/SSAS. The candidate will be accountable for design, development, implementation, and post-implementation maintenance and support, and will develop and test new interfaces, enhancements and changes to existing interfaces, new data structures, and new reporting capabilities.
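As a rough, hypothetical illustration of the migration work described above (not part of the posting's requirements), landing a table from the existing relational platform into a Hive-backed data hub might look like the following PySpark sketch; the connection details, database, table, and column names below are placeholders, not details from the role:

    # Hypothetical sketch only: ingest a table from the existing SQL Server platform
    # into a Hive-backed data hub with PySpark. All names and connection details
    # are placeholders for illustration.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("cb-data-hub-ingest")   # placeholder application name
        .enableHiveSupport()             # write through the Hive metastore on the Hadoop platform
        .getOrCreate()
    )

    # Pull a source table from the legacy relational platform over JDBC.
    accounts = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://legacy-host;databaseName=cb")  # placeholder URL
        .option("dbtable", "dbo.accounts")                              # placeholder source table
        .option("user", "etl_user")
        .option("password", "***")
        .load()
    )

    # Stamp each row with its load date and land it as a partitioned Hive table in the hub.
    (
        accounts
        .withColumn("load_date", F.current_date())
        .write.mode("overwrite")
        .partitionBy("load_date")
        .saveAsTable("cb_hub.accounts_raw")   # placeholder target table
    )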

Qualifications:

  • Bachelor's degree in a technical or quantitative field with preferred focus on Information Systems
  • Minimum 2+ years of experience with Big Data technologies (Hadoop, YARN, Sqoop, Spark SQL, NiFi, Talend, Hive, Impala, etc.)
  • 3-5+ years of experience in Java development
  • 2+ years of experience with Python is preferred
  • Experience performing data analytics on Hadoop-based platforms is preferred
  • Experience writing SQL queries
  • Experience with MapReduce
  • Experience implementing complex ETL transformations on the Hadoop platform
  • Strong experience with UNIX shell scripting to automate file preparation and database loads
  • Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues
  • Experience in implementing distributed and scalable algorithms (Hadoop, Spark) is a plus
  • Familiarity with relational database environments (Oracle, SQL Server, etc.), including databases, tables/views, stored procedures, agent jobs, etc.
  • Familiarity with NoSQL database platforms is a plus
  • Experience with ETL tools is a plus
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Proficiency across the full range of database and business intelligence tools; publishing and presenting information in an engaging way is a plus
  • Experience with multiple reporting tools (QlikView/QlikSense, Tableau, SSRS, SSAS, Cognos) is a plus
  • Strong development discipline and adherence to best practices and standards
  • Ability to manage multiple priorities and projects coupled with the flexibility to quickly adapt to ever-evolving business needs
  • Demonstrated independent problem solving skills and ability to develop solutions to complex analytical/data-driven problems
  • Must be able to communicate complex issues in a crisp and concise fashion to multiple levels of management
  • Excellent interpersonal skills necessary to work effectively with colleagues at various levels of the organization and across multiple locations
  • Financial Services and Commercial banking experience is a plus

Responsibilities:

  • Acquire data from primary or secondary data sources
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Transform existing ETL logic to run on the Hadoop platform (see the sketch after this list)
  • Innovate new ways of managing, transforming and validating data
  • Establish and enforce guidelines to ensure consistency, quality and completeness of data assets
  • Apply quality assurance best practices to all work products
  • Analyze, design, and code business-related solutions, as well as core architectural changes, using an Agile approach, delivering software on time and within budget
  • Experience working in development teams using agile techniques, object-oriented development, and scripting languages is preferred
  • Comfortable learning cutting-edge technologies and applying them to greenfield projects
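
As a hypothetical sketch of the ETL-transformation responsibility above (not an actual artifact of this role; all database, table, and column names are invented for illustration), a simple join-and-aggregate step from an existing SSIS/SQL Server pipeline re-expressed as Spark SQL on the Hadoop platform might look like:

    # Hypothetical sketch only: a join + aggregate ETL step re-implemented as Spark SQL.
    # All database, table, and column names below are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("cb-etl-transform")   # placeholder application name
        .enableHiveSupport()
        .getOrCreate()
    )

    # Summarize transactions per client per month, replacing an existing relational ETL step.
    monthly_summary = spark.sql("""
        SELECT c.client_id,
               c.client_name,
               date_format(t.txn_date, 'yyyy-MM') AS txn_month,
               SUM(t.amount)                      AS total_amount,
               COUNT(*)                           AS txn_count
        FROM   cb_hub.transactions_raw t
        JOIN   cb_hub.clients_raw      c ON t.client_id = c.client_id
        GROUP  BY c.client_id, c.client_name, date_format(t.txn_date, 'yyyy-MM')
    """)

    # Persist the result as a Hive table that reporting tools (QlikView, Tableau, SSRS) can query.
    monthly_summary.write.mode("overwrite").saveAsTable("cb_hub.client_monthly_summary")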