About Live Objects
Live Objects delivers continuous business process optimization to the enterprise through AI-driven automation of discovery, design, and process engineering, mining patterns in business objects, cases, and transactions across all process variations. The product integrates deeply with business process management platforms (such as SAP and Salesforce) and delivers continuous process engineering natively, as on-demand compositions, through the platform's interfaces. Live Objects' path-breaking process-calculus engine models predictive, diagnostic, and quality-related projections for live cases in business processes. Its AI-driven, on-demand process-reengineering engine composes process enhancements with rule-based associations to business objects, delivering quantified time, process, margin, and cost efficiency gains. On-demand compositions can be reviewed by functional subject matter experts in CRM, ERP, MRP, order-to-cash, and other domains before they are deployed into live business processes. Ongoing client engagements include self-optimizing a wide spectrum of business processes, including master data management, order-to-cash, sales distribution, and supply chain management. The company is based in Palo Alto and is venture funded by The Hive, a fund and co-creation studio for AI-powered enterprise applications.
About the role
The Backend Data Engineer will drive the design and development of key components of the platform, including the data processing pipeline; feature extraction and process modeling; intelligent integrations with key business process platforms (such as SAP and Salesforce), with live process rules and business object insertions; and process risk/conformance modeling.
As a Backend Data Engineer at a fast-growing startup, the successful candidate will lead the development of key aspects of Live Objects' product:
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Design and develop new product features.
- Understand the product vision and business needs to define product requirements and architectural solutions.
- Develop architectural and design principles to improve the performance, capacity, and scalability of the product.
- Create and maintain an optimal data pipeline architecture.
- Build reusable code and libraries for future use.
- Implement security and data protection.
The successful candidate will have experience working on innovative projects with fast-paced delivery schedules in both startups and large enterprises:
- Advanced working knowledge of SQL, including query authoring, and experience with a variety of relational and NoSQL databases.
- Experience building and optimizing "big data" pipelines, architectures, and data sets.
- Strong analytic skills related to working with unstructured datasets.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with user authentication and authorization across multiple systems, servers, and environments.
- Experience integrating multiple data sources and databases into one system.
- Proficient understanding of code versioning tools such as Git.
- Understanding of session management in a distributed server environment.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented and functional scripting languages: Python, Scala, etc.
- 10+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.