Infrastructure Architect

About Live Objects
Live Objects delivers continuous business process optimization to the enterprise through AI-driven automation of discovery, design, and process engineering, mining patterns in business objects, cases, and transactions across all process variations. The product integrates deeply with business process management platforms (e.g., SAP, Salesforce) and delivers continuous process engineering natively as on-demand compositions through the platform’s interfaces. Live Objects’ path-breaking process-calculus engine models predictive, diagnostic, and quality-related projections for live cases in business processes. Its AI-driven on-demand process reengineering engine composes process enhancements with rule-based associations to business objects, delivering quantified time, process, margin, and cost efficiency gains. The on-demand compositions can be reviewed by functional subject matter experts in CRM, ERP, MRP, order-to-cash, etc. before being deployed into live business processes. Ongoing client engagements include self-optimizing a wide spectrum of business processes, including master data management, order-to-cash, sales distribution, and supply chain management. The company is based in Palo Alto and is venture funded by The Hive, a fund and co-creation studio for AI-powered enterprise applications.
About the role
The Infrastructure Architect will drive the design and development of data management infrastructure, DevOps & CI/CD automation, cloud hosting, security and service availability aspects of Live Objects’ products and platforms.
Responsibilities
The successful candidate will lead the development of key aspects of Live Objects’ product:

  • Design industry-leading solutions that deeply optimize big data computing and storage, enabling new business needs
  • Design and build the infrastructure for extracting, preparing, and loading data from a variety of sources
  • Design and implement enterprise infrastructure and platforms required for cloud computing
  • Design, architect, and improve automation tools for systems management, including development of scripts/building tools
  • Analyze best practices and emerging concepts in DevOps, infrastructure automation, and system security
  • Dig deep into hardware, driver, Linux kernel, and TCP stack issues to troubleshoot problems and improve system performance
  • Implement Continuous Delivery solutions and help customers automate the various stages of their deployment and testing processes

About you
The successful candidate will have experience working on innovative projects with fast-paced
delivery schedules in both startups and large enterprises:

  • Proficient understanding of distributed computing principles
  • Experience managing Hadoop clusters, including all associated services
  • Ability to resolve ongoing operational issues with the cluster
  • Proficiency with Hadoop v2, MapReduce, HDFS
  • Experience with web technologies and building enterprise architecture roadmaps
  • Experience designing, integrating and managing complex infrastructure solutions
  • Hands-on experience with container stacks and orchestration tools such as Kubernetes, Rancher, and Docker
  • Expertise in the configuration and maintenance of common applications and services such as Tomcat, Nginx, LDAP, NFS, DHCP, and DNS
  • Hands-on, expert-level experience with at least one configuration management tool (Puppet, Ansible, or Chef)
  • 8+ years of experience with medium to large-scale Linux production environments
  • BS/MS degree in Computer Science or equivalent experience