Big Data Spark/Hadoop Software Engineer

Req #: 180009215
Location: Bangalore East, KA, IN
Job Category: Technology
Job Description:
JPMorgan Chase & Co. (NYSE: JPM) is a leading global financial services firm with assets of $2.6 trillion and operations
worldwide. The firm is a leader in investment banking, financial services for consumers and small business, commercial
banking, financial transaction processing, and asset management. A component of the Dow Jones Industrial Average,
JPMorgan Chase & Co. serves millions of consumers in the United States and many of the world's most prominent
corporate, institutional and government clients under its J.P. Morgan and Chase brands. Information about JPMorgan
Chase & Co. is available at www.jpmorganchase.com.
This role is within Corporate Technology, which is focused on building applications that support the daily needs of risk
managers and financial professionals.
The Corporate Technology (CT) organization develops applications and provides technology support for corporate
functions across JPMorgan Chase, including Global Finance, Corporate Treasury, Risk Management, Human Resources,
Compliance, Legal, and all functions within the Corporate Administrative Office (CAO).
CT teams are aligned with corporate partners’ evolving technology needs and the firm’s ever-expanding technology
controls agenda.
A top CT priority is building scalable corporate systems. Teams focus on:
- Responding to the evolving regulatory environment and helping to meet the firm’s internal and external regulatory
commitments
- Advancing the firm’s Roadmap programs: Single Sourcing of data, Architecture Convergence, and Rationalization
- Adopting industry-leading technologies to support best-in-class business capabilities for high-performance computing and
data storage solutions
- Driving innovation across the firm’s corporate technology portfolio, increasing efficiencies through process automation
and Agile application development, with an emphasis on user experience and shorter development cycles
- Investing in security & controls for cyber, access/entitlements uplift, data protection and application resiliency
The team is responsible for providing the data backbone to the Finance organization: it stores all finance information,
calculates aggregations, and supports internal management reporting and external regulatory reporting. The team
leverages Java, open-source, and Big Data technologies to provide data management tools, calculators and financial
reporting engines. The team is looking for proactive and hands-on technologists who can solve critical business
problems with innovative technology solutions and take responsibility for implementing multiple core components of this
architecture. JPMC is launching a multi-year initiative to invest in the industrialization of this process and is seeking highly
qualified candidates to drive this change. Responsibilities of the role include:
- Partner in driving a multi-year strategic initiative that delivers technical solutions to create a global warehouse that
consumes, models and stores finance data. Technical solutions will also be required for managing the data
(data management tools), processing the data (transforms, validations, aggregations), and reporting on and accessing
the data; a brief sketch of such processing follows this list.
- Follow SDLC best practices such as continuous integration and automated unit and regression testing, and focus on
end-to-end quality of the delivery.
- Work collaboratively in a team with fellow developers, sharing ideas to solve complex and challenging business problems.
- Be able to communicate effectively and work closely with business clients, other technology teams, support
partners and stakeholders to deliver and support business-aligned solutions.
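
For illustration only, and not part of the formal requirements: a minimal sketch in Scala of the transform/validate/aggregate processing described above, using Spark's DataFrame API. The input and output paths and the column names (account, currency, amount) are hypothetical, not taken from the role.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object FinanceAggregatesSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("finance-aggregates-sketch").getOrCreate()

        // Hypothetical input: one row per finance transaction.
        val txns = spark.read.parquet("/data/finance/transactions")

        // Transform: normalize the currency code.
        // Validate: drop rows with missing or negative amounts.
        val clean = txns
          .withColumn("currency", upper(col("currency")))
          .filter(col("amount").isNotNull && col("amount") >= 0)

        // Aggregate: totals per account and currency for downstream reporting.
        val totals = clean
          .groupBy("account", "currency")
          .agg(sum("amount").as("total_amount"), count(lit(1)).as("txn_count"))

        totals.write.mode("overwrite").parquet("/data/finance/aggregates")
        spark.stop()
      }
    }

In a production pipeline of the kind described above, the validations would typically be driven by configurable rules rather than hard-coded filters.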

Basic Technical Qualifications
- 6+ years of experience building enterprise-level applications
- Must have strong hands-on development experience on the following:
- Scala
- Spark
- Hadoop
- HBase
- Java 8
- Strong understanding of Spark and Hadoop internals, e.g. DataFrames, the DAG, data partitioning and distribution, and
NameNode limitations and tuning
- Strong understanding of MapReduce concepts (see the sketch after this list)
- Familiarity with Impala is a plus
- Bachelor’s degree in Computer Science (or related engineering field) a must
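
For candidates gauging the expected depth on MapReduce and partitioning, a minimal sketch of the classic word count expressed with Spark's RDD API in Scala; the input path and partition count are illustrative assumptions only.

    import org.apache.spark.sql.SparkSession

    object WordCountSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("wordcount-sketch").getOrCreate()
        val sc = spark.sparkContext

        // Map phase: split each line into (word, 1) pairs.
        val pairs = sc.textFile("/data/sample/text") // hypothetical path
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))

        // Shuffle + reduce phase: Spark hash-partitions the pairs by key
        // across 8 partitions, then sums the counts for each word.
        val counts = pairs.reduceByKey(_ + _, 8)

        counts.take(10).foreach(println)
        spark.stop()
      }
    }

The same dataflow appears as map and reduce tasks in classic Hadoop MapReduce; in Spark it becomes stages in the DAG separated by a shuffle, which is where the partitioning and distribution choices mentioned above matter.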
Soft Skills
- Strong work ethic and dedication
- An aptitude and interest in technology
- Highly motivated and interested in following up on technical issues and understanding the functional and technical impact
of any change

- Willingness to take initiative and work independently
