Provide consistent, unified access to data across different systems by using the flexible and semantically precise structure of the knowledge graph model.
Implement a Connected Inventory of enterprise data assets, based on a knowledge graph, to gain business insights into current status and trends, risks and opportunities, through a holistic, interrelated view of all enterprise assets.
Enable quick and easy discovery in clinical trials, medical coding of patients’ records, advanced drug safety analytics, knowledge graph powered drug discovery, regulatory intelligence and many more.
Make better sense of enterprise data and assets for competitive investment market intelligence, efficient connected inventory management, enhanced regulatory compliance and more.
Improve engagement, discoverability and personalized recommendations for Financial and Business Media, Market Intelligence and Investment Information Agencies, Science, Technology and Medicine Publishers, etc.
Connect and improve the insights from your customer, product, delivery, and location data. Gain a deeper understanding of the relationships between products and your consumers’ intent.
We are looking for a smart and enthusiastic person with data analysis and ETL experience to fill the role of Data Engineer. We value smart, awesome people who want to solve hard problems in a fun, collaborative and casual environment.
Your Role
Your role will include some or all of the following responsibilities, depending on your profile and preferences:
Analyzing datasets;
Data modelling;
Creating data pipelines for conversion and integration of various datasets to an RDF output;
Data governance;
Freedom to use different programming languages to fulfill research tasks;
We use the following core technology stack: Apache Spark, RabbitMQ, Talend and OpenRefine for ETL pipelines; GraphDB, an RDF triple store, with its query language SPARQL; and Elasticsearch for text search and aggregations.
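To give a flavor of the pipeline work described above, here is a minimal sketch of converting a tabular dataset into RDF triples (N-Triples syntax). The sample CSV, the base IRI and the ontology namespace are illustrative assumptions, not part of our actual data or schemas; in practice this is done with the ETL tools and GraphDB mentioned above.

```python
import csv
import io

# Hypothetical sample input; column names are illustrative only.
CSV_DATA = """id,name,category
1,Widget,Tools
2,Gadget,Electronics
"""

# Assumed namespaces for the example, not real project IRIs.
BASE = "http://example.com/resource/"
ONT = "http://example.com/ontology/"

def csv_to_ntriples(text):
    """Turn each CSV row into RDF triples in N-Triples syntax:
    one subject per row, one triple per non-id column."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        subject = f"<{BASE}product/{row['id']}>"
        for column, value in row.items():
            if column == "id" or not value:
                continue
            triples.append(f'{subject} <{ONT}{column}> "{value}" .')
    return "\n".join(triples)

print(csv_to_ntriples(CSV_DATA))
```

The resulting N-Triples file can be loaded into an RDF store such as GraphDB and queried with SPARQL; real pipelines would add IRI-safe escaping, datatype handling and a proper ontology mapping.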
Your Profile
Experience with various kinds of data: CSV, RDBMS, XML and JSON;
University Degree in Information Technologies, Computer or Library Science;
Knowledge of some programming or scripting language, e.g. Java, Scala, Python;
Experience using Version Control Systems;
Aptitude to learn new technologies quickly, gain familiarity with new business domains;
Experience with cluster technologies such as Kubernetes or Hadoop will be an advantage;
Experience with an ETL tool, e.g. Talend, OpenRefine, Informatica or SQL DTS, will be appreciated;
Fluent in English.
Our Offer
Common sense-driven organizational culture, in which you will have constructive input and your voice will be heard (i.e. a chance to make a difference!)