Thursday, October 25, 2018

Oracle on wheels - My DevOps story


I have had the privilege of learning and working with DevOps tools such as Jenkins and GoCD, and recently came across FlexDeploy, which seems a perfect fit for my area of work. It supports every stage of the DevOps loop.


DevOps continuous loop

Today's post highlights some of its features and attributes. FlexDeploy is a product born out of the startup Flexagon a few years back, and it has picked up a lot of attention, especially in the Oracle space. FlexDeploy offers plugins for almost every Oracle product and alleviates pain points across the CI/CD horizon. You can easily build and deploy artifacts across environments based on your technology of choice. For the purposes of this blog, I'll stick to the Oracle BI domain.

I have specifically explored the ODI, OBIEE, and Oracle DB plugins. I'll cover each briefly, but first let me explain how FlexDeploy talks to each environment, viz. Development, Testing, Production, etc.

FlexDeploy uses SSH connectivity to each environment: it copies the project-specific binaries into that environment and executes them there, whether the task is to export artifacts or to deploy them. The diagram below shows the architecture in more depth:

FlexDeploy Architecture
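
To make the push-and-execute idea concrete, here is a minimal sketch of the pattern in Python using paramiko. It is illustrative only; the host, user, paths, and script name are placeholders, not FlexDeploy internals.

```python
# Illustrative sketch of the SSH push-and-execute pattern; all names
# here (host, user, paths, script) are hypothetical placeholders.
import os
import paramiko

def push_and_execute(host, user, key_file, local_script, remote_dir):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(hostname=host, username=user, key_filename=key_file)

    # Copy the project-specific binary/script into the target environment...
    sftp = ssh.open_sftp()
    remote_path = f"{remote_dir}/{os.path.basename(local_script)}"
    sftp.put(local_script, remote_path)
    sftp.close()

    # ...then run it there, whether it exports artifacts or deploys them.
    _, stdout, stderr = ssh.exec_command(f"sh {remote_path}")
    exit_code = stdout.channel.recv_exit_status()
    out, err = stdout.read().decode(), stderr.read().decode()
    ssh.close()
    return exit_code, out, err

# e.g. push_and_execute("test-host", "oracle", "~/.ssh/id_rsa",
#                       "scripts/export_artifacts.sh", "/tmp/deploy")
```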

ODI
The ODI plugin within FlexDeploy exports scenarios from one environment (generally the build environment, i.e. Dev or Test) and imports them into your environment of choice (Test, Prod). It supports both ODI 11g and 12c and uses a marker- and regex-based approach to filter the scenarios that need to be exported. Once exported to the artifact directory (of course, you can push them to VCS repositories as well, viz. SVN, Git), they can then be imported into the target systems over the same SSH connection.
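To give a flavour of the regex-based filtering, here is a tiny sketch; the scenario names and naming convention are made up for the example.

```python
# Hedged illustration of regex-based scenario filtering; the names and
# the naming convention below are invented for the example.
import re

scenarios = ["LOAD_SALES_DAILY", "LOAD_INVENTORY", "TEST_SCRATCHPAD", "EXP_CUSTOMER"]

# Export only scenarios matching the project's naming convention.
pattern = re.compile(r"^(LOAD|EXP)_")
to_export = [s for s in scenarios if pattern.match(s)]
print(to_export)  # ['LOAD_SALES_DAILY', 'LOAD_INVENTORY', 'EXP_CUSTOMER']
```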
Bonus point: I have extended this plugin to support generating data lineage, along with several bug fixes. You can request a copy in the comments section below.

OBIEE
The OBIEE plugin is interesting in that it supports partial deployments: you can migrate all the web catalog objects, or select and migrate only a few. It supports RPD migration as well, including changing connection pools programmatically based on the target instance. After deployment, it can restart the OBIEE services for you. Again, it is well suited to both OBIEE 11g and 12c.
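
The connection pool swap can be pictured as a rewrite over an XML export of the repository. The sketch below is purely conceptual: the element and attribute names are hypothetical, not the actual BI Server XML schema or the plugin's implementation.

```python
# Conceptual sketch only: retargeting connection pools per environment by
# rewriting an XML export of the repository. "ConnectionPool", "dataSource"
# and "user" are hypothetical names, not the real BI Server XML schema.
import xml.etree.ElementTree as ET

TARGETS = {
    "TEST": {"dataSource": "testdb:1521/TESTPDB", "user": "BI_TEST"},
    "PROD": {"dataSource": "proddb:1521/PRODPDB", "user": "BI_PROD"},
}

def retarget_connection_pools(xml_in, xml_out, env):
    tree = ET.parse(xml_in)
    for pool in tree.iter("ConnectionPool"):  # hypothetical element name
        pool.set("dataSource", TARGETS[env]["dataSource"])
        pool.set("user", TARGETS[env]["user"])
    tree.write(xml_out)

# retarget_connection_pools("repo.xml", "repo_prod.xml", "PROD")
```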

Oracle DB
The Oracle DB plugin is richer in functionality and support. It creates a baseline of your database in each environment and compares those baselines for any discrepancies in metadata. You can synchronize the environments on the fly or on a schedule.
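
The baseline-and-compare idea boils down to snapshotting each environment's dictionary metadata and diffing the sets. A minimal sketch, assuming cx_Oracle and read access to USER_OBJECTS (connection details are placeholders):

```python
# Minimal baseline-and-compare sketch; DSNs and credentials are placeholders.
import cx_Oracle

QUERY = "SELECT object_type, object_name FROM user_objects"

def baseline(dsn, user, password):
    with cx_Oracle.connect(user, password, dsn) as conn:
        cur = conn.cursor()
        cur.execute(QUERY)
        return set(cur.fetchall())

dev = baseline("devhost:1521/DEVPDB", "app", "secret")
test = baseline("testhost:1521/TESTPDB", "app", "secret")

# Metadata present in one environment but missing in the other.
print("Missing in Test:", sorted(dev - test))
print("Missing in Dev:", sorted(test - dev))
```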

The above highlights the tool's capabilities succinctly; more information can be found here.

If you have any questions pertaining to the tool, I'll be happy to answer.


Friday, August 10, 2018

ODI 12c Data Lineage Tool

Hi Guys,

Following the Informatica-to-ODI automation tool, I am happy to present an ODI 12c documentation tool that connects to your work repository and generates documentation for the selected mappings.
That's not all: I am releasing the tool for public download. So, if you want a copy, please get in touch.
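
Under the hood, the tool reads mapping metadata straight from the work repository. The discovery step looks roughly like the sketch below, assuming cx_Oracle and a repository schema named ODI_REPO (credentials and DSN are placeholders; SNP_MAPPING holds the mapping definitions in a 12c work repository):

```python
# Rough sketch of discovering mappings in an ODI 12c work repository;
# the schema name, password, and DSN are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("ODI_REPO", "password", "odihost:1521/ODIPDB")
cur = conn.cursor()
cur.execute("SELECT name FROM snp_mapping ORDER BY name")
for (mapping_name,) in cur:
    print(mapping_name)  # presented in the UI for selection
conn.close()
```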

Select ODI Instance to connect:

Select mappings to generate data lineage:

Output File:


Wednesday, June 6, 2018

Streaming Analytics through OBIEE

Hi Guys,

I am back with a new BI piece, and this time it's on a big data environment. This is not a tool but an amalgamation of multiple technologies that provides a seamless, real-time BI experience and lets the business take decisions quickly.

Many companies today are looking for alternatives to store huge volumes of data and gather insights quickly, rather than waiting for the regular BI jobs to complete before seeing the reports. At the same time, moving entirely to a different stack is not feasible, as business users have already grown accustomed to and comfortable with the existing reporting tool. And what about the huge investments the company has made in Oracle products? 😉

To answer these questions, I have created a demo that shows how we can leverage the existing technology and combine it with big data products at the backend to give end users a seamless experience. The demo also shows how we can process data in real time and visualize it live in OBIEE.


The entire data flow works as follows:

Data is fed from multiple source systems/applications into Kafka topics, which pass it on to Apache Spark. Spark processes the data and opens two streams for data population.
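
In Spark Structured Streaming terms, the Kafka leg looks roughly like this (broker address and topic name are placeholders):

```python
# Minimal sketch of reading the source events from Kafka with Spark
# Structured Streaming; broker address and topic name are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-bi-demo").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "source_events")  # topic fed by the source apps
          .load()
          .selectExpr("CAST(value AS STRING) AS payload"))
```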



The first stream pushes the data to the data warehouse, in our case a Kudu database maintained on a Hadoop cluster. Once the data lands in Kudu, we can further transform/aggregate it or apply predictive analytics, and the results are finally visualized in OBIEE through the RPD, using Apache Impala as the ODBC connector.
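
Continuing the sketch above, the warehouse leg can be expressed as micro-batches pushed to Kudu through the kudu-spark connector inside foreachBatch (the master address and table name are placeholders, and the connector jar must be on the Spark classpath):

```python
# Sketch of the Kudu leg; "kudu-master:7051" and the table name are
# placeholders, and this assumes the kudu-spark connector is available.
def write_to_kudu(batch_df, batch_id):
    (batch_df.write
     .format("org.apache.kudu.spark.kudu")
     .option("kudu.master", "kudu-master:7051")
     .option("kudu.table", "impala::default.sales_events")
     .mode("append")
     .save())

warehouse_stream = (events.writeStream
                    .foreachBatch(write_to_kudu)
                    .option("checkpointLocation", "/tmp/chk/kudu")
                    .start())
```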



The second stream pushes the data from Spark back to Kafka on a different topic. This evenly timed data is then sent to a Node server, which is responsible for delivering it to each client window with the help of Socket.IO. Finally, the data is visualized with the help of Highcharts.
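
The second leg, again continuing the sketch, writes the processed stream back to Kafka on a different topic for the Node/Socket.IO server to pick up (topic and checkpoint path are placeholders):

```python
# Sketch of writing the processed stream back to Kafka on a second topic;
# the Kafka sink requires a "value" column and a checkpoint location.
live_stream = (events.selectExpr("CAST(payload AS STRING) AS value")
               .writeStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("topic", "live_dashboard")  # the "different topic"
               .option("checkpointLocation", "/tmp/chk/kafka")
               .start())
```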


Some OBIEE sample reports:

More info @ https://www.linkedin.com/pulse/streaming-analytics-obiee-parikshit-agarwal/

