Dashboards let us present data in a variety of creative ways, so complex information can be analyzed at a glance. Along with charts and tables, maps are an increasingly common format, and SAP Analytics Cloud makes them easier than ever to create, even without technical training. In this article we’ll look at the process of adding custom maps to SAP Analytics Cloud Stories, using a case study as an example.
Your current SAP BusinessObjects environment is key to your business. However, like many of our customers, you are probably carrying some extra weight: too many licenses that are not used to their full potential. Why not review whether you can optimize your environment and convert some of the unused license maintenance to SAP Analytics Cloud?
If your organisation is running SAP HANA, at some point you’ll most likely need to apply an upgrade to the production environment. Usually this is either a support package (SP), released in response to a specific bug identified by users or to address a newly emerged vulnerability, or the annual Support Package Stack (SPS) upgrade, which adds features and enhancements to the existing SAP HANA database version.
When implementing data science projects, we regularly face cases where we must decide on the implementation method that integrates most smoothly with the pipeline. The goal is the simplest possible implementation, since the overall design is already complex. We focus on simplifying our approaches as much as possible so we can keep track of all the steps and modify them easily, with minimal implementation and modification time.
Some tools are more productive than others. Through our experience implementing an optimal machine-learning pipeline in production, we have come to appreciate the raw strength of combining SAP HANA with SAP Data Services. Reformulating and optimizing an approach to use this combination can save significant time compared with a vanilla approach that relies on Python for data wrangling, cleaning, discovery, and normalization, tasks that make up a significant part of machine-learning pipeline development.
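To make the comparison concrete, here is a minimal sketch of the kind of hand-rolled wrangling step the "vanilla" Python approach involves, assuming pandas and an illustrative `revenue` column (the column name, sample data, and min-max scaling choice are assumptions for this example, not part of any specific project):

```python
# Illustrative "vanilla" Python wrangling step: drop missing rows,
# then min-max scale a numeric column to [0, 1] with pandas.
import pandas as pd


def clean_and_normalize(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Drop rows missing `column`, then min-max scale it to [0, 1]."""
    out = df.dropna(subset=[column]).copy()
    lo, hi = out[column].min(), out[column].max()
    # Guard against a constant column, which would divide by zero.
    out[column] = 0.0 if hi == lo else (out[column] - lo) / (hi - lo)
    return out


# Hypothetical sample data for demonstration.
sales = pd.DataFrame({"revenue": [100.0, None, 250.0, 400.0]})
print(clean_and_normalize(sales, "revenue"))
```

Every such step written this way must also be tested, deployed, and scheduled by hand, which is where pushing the same transformations down into SAP HANA and SAP Data Services can reclaim time.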