E4 2017
 

E4 2017 - Speaker Sessions


KEYNOTE – DBAs Will Die Another Day
In 2013, Tim Gorman wrote a post titled “The DBA is dead. Again”. Tim said that Mark Twain’s “Reports of my death are greatly exaggerated”, though apocryphal, echoed the long-anticipated death of the role of the database administrator. This session presents my views on the DBA’s fears, doubts and opportunities in the age of Data Lakes, DevOps, Cloud, Big Data, Open Source, Agile, Machine Learning, Artificial Intelligence, Bimodal IT, Pizza teams… you name it. The modern data(base) administrator/platform builder/engineer has to navigate between hype, myths and half-truths, but also breakthrough innovations. Many embrace the new world and make the transition; many others bemoan the decline of a lost world, forgetting there were no DBAs 20 years ago. I’ll try to make this session thought-provoking and entertaining!
Presenter: Christian Bilien

Analytics as a Business with Exadata and Big Data
This session details the journey of 84.51° and its integration of Oracle’s Exadata and BDA solutions to drive faster and more accurate analytic capabilities. The expectation is that Big Data and Hadoop will offer the same level of SLAs for RPO, RTO, Security, Stability and Performance as the offerings from other mature Enterprise Relational Database Management Systems in the industry today. The presentation covers topics such as the Big Data/Exadata architecture at 84.51°, challenges in its implementation, management of capacity and resources (Dynamic Resource Pool), lessons learned, performance tuning exercises, and best practices for managing and productionizing Hadoop. The session then discusses in depth the application tuning strategies on Hive and Spark, focusing mainly on configuration parameters, Spark memory management, executor sizing and other tips/best practices for performance tuning on Big Data. A detailed illustration of how to use the Spark UI to identify Spark bottlenecks is also provided.
Presenters: Rashmi Kansakar & Weidong Zhou

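By way of illustration only (this is not the presenters’ material), a minimal PySpark sketch of the kind of executor-sizing and memory parameters the abstract refers to; the values are placeholder assumptions, not recommendations, and exact property names vary slightly between Spark versions.

    from pyspark.sql import SparkSession

    # Illustrative executor sizing: 8 executors x 4 cores x 8 GB heap,
    # with explicit overhead and a tuned unified-memory fraction.
    spark = (
        SparkSession.builder
        .appName("executor-sizing-sketch")
        .config("spark.executor.instances", "8")
        .config("spark.executor.cores", "4")
        .config("spark.executor.memory", "8g")
        .config("spark.executor.memoryOverhead", "1g")   # off-heap headroom per executor
        .config("spark.memory.fraction", "0.6")          # heap share for execution + storage
        .getOrCreate()
    )
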
Automating Cloud Provisioning: Lessons Learned
One of the biggest advantages of the Cloud offering is the ability to get environments up and running in a matter of minutes instead of weeks. You can now provision a server with terabytes of storage with just a few clicks on a web page, or at least that is the claim. As part of our effort to help clients move to the Cloud, we automated provisioning at scale, leveraging the APIs each vendor provides. This session focuses on what we learned during the process, what "few clicks" means in reality, how to programmatically skip such clicks, what seamlessly worked and what gave us some headaches. The main focus is on the Oracle and Amazon cloud offerings.
Presenter: Mauro Pagano

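As a hedged illustration of programmatically skipping the clicks (my own sketch using AWS’s boto3 SDK, not the presenter’s code; the AMI ID, key pair and sizes are placeholder assumptions):

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # One API call instead of a console walk-through: a single instance
    # with a ~1 TB root volume. All identifiers below are placeholders.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder AMI
        InstanceType="m4.xlarge",
        KeyName="my-keypair",                # assumed existing key pair
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[{
            "DeviceName": "/dev/sda1",
            "Ebs": {"VolumeSize": 1024, "VolumeType": "gp2"},
        }],
    )
    instances[0].wait_until_running()
    print("provisioned:", instances[0].id)
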
Big Data and the Multi-model Database
In its report Top 10 Strategic Technology Trends for 2017, Gartner highlights interesting things like Artificial Intelligence and Advanced Machine Learning, Intelligent Apps, Intelligent Things and Conversational Systems. Add to this the buzz around Big Data, and business people realize that data, in all forms and sizes, is critical for making the best possible decisions to win the competition for customers. According to Gartner (Magic Quadrant for Data Warehouse and Data Management Solutions for Analytics 2016), “Organizations now require data management solutions for analytics that are capable of managing and processing internal and external data of diverse types in diverse formats, in combination with data from traditional internal sources. Data may even include interaction and observational data — from Internet of Things sensors, for example. This requirement is placing new demands on software in this market as customers are looking for features and functions that represent a significant augmentation of existing enterprise data warehouse strategies.” These needs put a lot of pressure on the technology. One recognized problem is the different data models needed for different types of data, but even more problematic is finding a single query language to query all the different data. Usually the user needs several query languages to query the different sources, and then yet another query language to somehow join the data together to make sense of it. One of the solutions is the multi-model database: a database that stores several data models in one single database, allowing the user to query the data with one single query language and to join data of different data models together. What is this multi-model database, and how close is the Oracle database to being one? In this session we will talk about the different data models and how well Oracle Database 12.2 supports them.
Presenter: Heli Helskyaho

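As a minimal sketch of the idea (my own hypothetical example, not from the session): Oracle Database 12.2 can hold relational rows and JSON documents side by side and join them in one SQL statement. The table names, columns and connection details below are assumptions.

    import cx_Oracle

    conn = cx_Oracle.connect("app_user", "app_password", "dbhost/orclpdb")  # placeholder credentials
    cur = conn.cursor()

    # customers is an ordinary relational table; orders keeps whole JSON
    # documents in a column declared with an IS JSON constraint (assumed schema).
    cur.execute("""
        SELECT c.customer_name,
               JSON_VALUE(o.doc, '$.shipping.city')          AS ship_city,
               JSON_VALUE(o.doc, '$.total' RETURNING NUMBER) AS order_total
        FROM   customers c
        JOIN   orders o ON o.customer_id = c.customer_id
    """)
    for name, city, total in cur:
        print(name, city, total)
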
Building an Integrated Analytics Platform with Oracle and Hadoop
This session explores how one organization built an integrated analytics platform by implementing Gluent to offload its Oracle enterprise data warehouse (EDW) data to Hadoop, and to transparently present native Hadoop data back to its EDW. As a result, the company is now able to support operational reporting, OLAP, data discovery, predictive analytics and machine learning from a single scalable platform that combines the benefits of an enterprise data warehouse with those of an analytical data lake. The session includes a brief overview of the platform and use cases that demonstrate how the company has used the solution to provide business value.
Presenters: Gerry Moore & Suresh Irukulapati

Exadata: The Road Ahead
Databases are the backbone of IT infrastructure, and Oracle’s Exadata Database Machine has always had a singular commitment: to be the best platform for running the Oracle Database for any workload. This talk reviews what has been done to meet that commitment through the evolution of Exadata releases, and how that commitment will be maintained for the foreseeable future. We will cover Exadata’s incorporation of state-of-the-art hardware, such as PCI NVMe Flash, Software in Silicon, ultra-fast networking and upcoming Non-Volatile Memory (NVM), plus Exadata’s unique software innovations and the seamless incorporation of Exadata into Cloud computing.
Presenter: Gurmeet Goindi

Expert Exadata 2nd Edition - 2 Years On
It has been almost two years since Expert Oracle Exadata (2nd edition) was published. When we wrote the book we were fully aware that Exadata was a fast-moving target, and we tried to keep up with it in blog posts over time. However, until this conference the authors haven’t been able to get together to share their views on what we would have updated had we still been writing today. This talk by Andy Colvin, Frits Hoogland and Martin Bach picks up after the book and shows a selection of new features that have come to Exadata since the book went to press.
Presenters: Martin Bach, Frits Hoogland, Andy Colvin & Karl Arao

Offload, Transform, and Present - the New World of Data Integration
How much time and effort (and budget) do organizations spend moving data around the enterprise? Unfortunately, quite a lot. These days, ETL developers are tasked with performing the Extract (E) and Load (L), and so spend less time on their craft, building Transformations (T). This changes in the new world of data integration. By offloading data from the RDBMS to Hadoop, with the ability to present it back to the relational database, data can be seamlessly integrated between different source and target systems. Transformations occur on data offloaded to Hadoop, using the latest ETL technologies, or in the target database, with a standard ETL-on-RDBMS tool. In this session, we’ll discuss how the new world of data integration lets teams focus on transforming data into insightful information by simplifying the data movement process.
Presenter: Michael Rainey

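A hedged sketch of the offload step (my own illustration under assumed connection details and table names, not the presenter’s or any vendor’s implementation): extract an Oracle table over JDBC with Spark and land it as Parquet on HDFS, where Hadoop-side tools can take over the transformations.

    from pyspark.sql import SparkSession

    # Requires the Oracle JDBC driver on the Spark classpath.
    spark = SparkSession.builder.appName("offload-sketch").getOrCreate()

    # Extract: pull the source table out of the RDBMS over JDBC.
    sales = (
        spark.read.format("jdbc")
        .option("url", "jdbc:oracle:thin:@//dbhost:1521/orclpdb")  # assumed connect string
        .option("dbtable", "SALES_HISTORY")                        # hypothetical table
        .option("user", "etl_user")
        .option("password", "etl_password")
        .load()
    )

    # Offload: write it to HDFS as Parquet, partitioned so downstream
    # Hive/Spark transformations can prune by date.
    sales.write.mode("overwrite").partitionBy("SALE_DATE").parquet(
        "hdfs:///data/offload/sales_history"
    )
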
Using Columnar Data Across the Information Lifecycle from Hottest In-Memory to Coldest Parquet
Since 11.2, Oracle has been working on end-to-end columnar technology to support columnar processing across the entire Information Lifecycle. In this talk we will look at what each technology offers and how they fit together: where they fit in the Information Lifecycle, when it makes sense to use each one, and how to move data between them and manage them. We'll be covering aspects of: DBIM, CELLMEMORY, HCC on Exadata, and archiving the coldest data on HCC on ZFS and on Parquet on BDS.
Presenter: Roger MacNicol

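As a small, purely illustrative sketch of the coldest tier (my own assumption of what archiving aged rows to Parquet might look like; the table and columns are hypothetical and this is not the speaker’s example):

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Pretend these rows were unloaded from an aged, rarely queried archive partition.
    cold_rows = pd.DataFrame({
        "order_id": [1001, 1002, 1003],
        "order_date": pd.to_datetime(["2009-01-15", "2009-02-02", "2009-02-20"]),
        "amount": [199.00, 54.25, 310.40],
    })

    # Parquet keeps the cold tier columnar and compressed on cheap storage,
    # so occasional scans remain reasonably inexpensive.
    pq.write_table(pa.Table.from_pandas(cold_rows), "orders_2009_cold.parquet")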