Senior Product Engineer - Infrastructure Development

Req #: 180006685
Location: Lewisville, TX, US
Job Category: Technology
Job Description:
As an experienced Infrastructure Development professional, your love of technology will have a direct impact on the future of the business. As a senior member of a high-performance team, you'll be immersed in every phase of the software development lifecycle: design, development, integration, operation, support, and testing of infrastructure services. You'll ensure that team goals are met and that best practices, architectural design standards, and data, risk, and security management policies are adhered to. You'll be instrumental in designing, developing, and testing code; solving difficult technical issues; developing integration elements; and building data models, APIs, and open third-party SDKs. You'll see your ideas come to life as part of a small, success-driven team. And as part of JPMorgan Chase & Co.'s global technology community, you'll also be able to collaborate with peers around the world to tackle big challenges.
The Core Foundation Services (CFS) team is responsible for providing end-to-end support for critical technologies used across the company. This includes Configuration and Orchestration, Identity Management, Name Services, Enterprise Monitoring Solutions, and the automation tools used to manage these technologies.
The BIFrost platform team within CFS is seeking a Product Management/Strategy/Engineering candidate to implement Hadoop and messaging solutions with microservices/API-based architectures for the firm, and to take on platform onboarding responsibilities for other applications. The candidate will be responsible for scrum master duties, product management, onboarding, backlog/sprint management and reporting, and implementing and developing solutions with Kafka and Hadoop/Cloudera ecosystem technologies. The team is building a solution that combines big data with cutting-edge relevance algorithms and methodologies to deliver a high-availability, low-latency service and to support service operations and security analytics.
  • Act as scrum master, delivering the product as planned across sprints using SAFe Agile or Scrum methodology.
  • Lead the onboarding/pipeline track, bringing new sources onto the platform on time and on schedule.
  • Manage and groom the product backlog, roadmap, and strategy based on client and market requirements; additional responsibilities include product pricing and operational requirements.
  • Lead the design and development of medium- to large-scale complex projects using an agile approach and security standards.
  • Lead and participate in proofs of concept to prototype and validate ideas, automating platform installation, configuration, and operations processes and tasks (site reliability engineering) for a global events data platform.
  • Manage cyber and project milestones, in addition to project finance, communications objectives, and reporting responsibilities.
  • Execute architecture/strategy responsibilities; optimize processes; manage product capacity requirements with forecasting and analysis; and contribute to continuous improvement by bringing optimized, efficient practices to current core services (platform and infrastructure) areas.
  • Guide development teams on choosing the right frameworks and implementing highly available solutions.
Qualifications:
  • Bachelor's degree in Computer Science, Information Systems, Math, or equivalent training and relevant experience
  • 10+ years of work experience within one or more IT organizations; prior experience in architecting solutions, technology engineering, and development is a plus
  • 5+ years of advanced Java/Python development experience (Spring Boot/Python server-side components preferred)
  • Hadoop ecosystem (HDFS, HBase, Spark, ZooKeeper, Impala, Flume, Parquet, Avro) experience with high-volume platforms and scalable distributed systems
  • Experience working with data models, frameworks, and open-source software; RESTful API design and development; and software design patterns
  • Experience with Agile/Scrum methodologies, FDD (Feature-Driven Development), TDD (Test-Driven Development), Elasticsearch (ELK), SRE automation for Hadoop technologies, Cloudera, Kerberos, encryption, performance tuning, and CI/CD (continuous integration and deployment)
  • Capable of full-lifecycle development: user requirements, user stories, development (both with a team and individually), testing, and implementation
  • Knowledge of technology infrastructure stacks a plus, including: Windows and Linux operating systems, networking (TCP/IP), storage, virtualization, DNS/DHCP, Active Directory/LDAP, cloud, source control (Git), ALM tools (Confluence, Jira), APIs (Swagger, gateways), and automation (Ansible/Puppet)
  • Production implementation experience on projects of considerable data size and complexity
  • Strong verbal and written communication skills, with the ability to be highly effective with both technical and business partners; ability to operate effectively and independently in a dynamic, fluid environment