Code Project Overview
This open source project delivers a simple metadata-driven processing framework for Azure Data Factory (ADF). The framework couples ADF with an Azure SQL Database that houses execution stage and pipeline information, which is later retrieved using an Azure Functions App. The parent/child metadata structure firstly allows stages of dependencies to be executed in sequence, and secondly allows all pipelines within a stage to be executed in parallel, offering scaled-out control flows where no inter-dependencies exist.
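The parent/child structure described above can be pictured as two metadata tables, one for stages and one for the pipelines belonging to each stage. The sketch below is illustrative only; the table and column definitions are simplified assumptions, not the framework's exact schema:

```sql
-- Illustrative sketch of the parent/child metadata structure.
-- Stages run in sequence; pipelines within a stage run in parallel.
CREATE TABLE [procfwk].[Stages]
(
    [StageId]   INT IDENTITY(1, 1) PRIMARY KEY,
    [StageName] NVARCHAR(255) NOT NULL,
    [Enabled]   BIT NOT NULL DEFAULT 1
);

CREATE TABLE [procfwk].[Pipelines]
(
    [PipelineId]   INT IDENTITY(1, 1) PRIMARY KEY,
    [StageId]      INT NOT NULL
        REFERENCES [procfwk].[Stages] ([StageId]), -- parent stage
    [PipelineName] NVARCHAR(255) NOT NULL,
    [Enabled]      BIT NOT NULL DEFAULT 1
);
```

Because each pipeline row points at exactly one parent stage, the executor can select a stage, launch all of its enabled pipelines together, then move on to the next stage only once they complete.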
The framework is designed to integrate with any existing Data Factory solution by making the lowest level executor a stand-alone Worker pipeline that is wrapped in a higher level of controlled (sequential) dependencies. This level of abstraction means that, operationally, nothing about the monitoring of orchestration processes is hidden in multiple levels of dynamic activity calls. Instead, everything from the processing pipeline doing the work (the Worker) can be inspected using out-of-the-box ADF features.
This framework can also be used in any Azure Tenant and allows the creation of complex control flows across multiple Data Factory resources by connecting Service Principal details through metadata to targeted Subscriptions > Resource Groups > Data Factories > Pipelines. This offers very granular administration over data processing components in a given environment.
Framework Key Features
- Granular metadata control.
- Metadata integrity checking.
- Global properties.
- Complete pipeline dependency chains.
- Execution restart-ability.
- Parallel execution.
- Full execution and error logs.
- Operational dashboards.
- Low cost orchestration.
- Disconnection between framework and Worker pipelines.
- Cross Data Factory control flows.
- Pipeline parameter support.
- Simple troubleshooting.
- Easy deployment.
- Email alerting.
- Automated testing.
- Azure Key Vault integration.
Thank you for visiting, details on the latest framework release can be found below.
Version 1.8.4 of ADF.procfwk is ready!
Another quick minor release to keep things moving along in the backlog.
The main purpose of this release was (as the title suggests) to better organise the metadata database objects. Previously, everything for the processing framework lived within a single database schema, ‘procfwk’. Now, as the project has evolved, it was time to separate things out for ease of use and understanding, and so that specific areas of development could follow their respective paths.
Another bug reported by the community has also been addressed; thanks to Pedro Fiadeiro for this feedback. It has been fixed in this release of the processing framework with updates to the affected stored procedures.
The existing database schema ‘procfwk’ will now be dedicated to framework runtime operations; to define it another way, anything required by Data Factory during an execution.
Given the above, the database now has three new schemas, as follows:
- procfwkHelpers – these are objects not used at runtime, but exist to support users when modifying the metadata, including post deployment default values.
- procfwkReporting – these objects (mostly views) are used by Power BI when delivering a dashboard of framework executions.
- procfwkTesting – these objects will support the NUnit testing of the framework with scripts and procedures dedicated to test setup and tear down.
In all cases where legacy objects have been transferred to the new schemas, database synonyms have been created to ensure backwards compatibility. In addition, all object transfers, drops and creations are handled by a set of pre and post deployment scripts.
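The backwards compatibility pattern uses standard T-SQL synonyms: a pointer is left behind in the old ‘procfwk’ schema that resolves to the object's new home. The object name below is hypothetical, chosen only to illustrate the pattern:

```sql
-- Hypothetical example: a reporting view moved out of the legacy
-- schema, with a synonym left behind so any existing references
-- to [procfwk].[CurrentExecutionSummary] keep working unchanged.
CREATE SYNONYM [procfwk].[CurrentExecutionSummary]
    FOR [procfwkReporting].[CurrentExecutionSummary];
```

Callers querying the old two-part name are transparently redirected, so nothing downstream needs to change when an object is relocated.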
A few new stored procedures have been added to harden the solution and improve deployment/test handling. At a high level, these changes are:
- New helper procedures have been added to empty all database metadata tables.
- All post deployment metadata has been refactored into a series of [procfwkHelpers].[SetDefaultXXXXX] stored procedures that now include idempotent handling where applicable.
- New testing procedures have been added to wrap up database clear down and setup of default metadata. These will be used as part of the NUnit test TearDown methods.
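The idempotent handling mentioned above typically means a default row is only inserted when it is not already present, so the procedure can be re-run safely after deployments. The sketch below is a minimal illustration of that pattern; the table, column and property names are assumptions for the example, not the framework's exact objects:

```sql
-- Minimal sketch of an idempotent SetDefault procedure: re-running
-- it never duplicates the default row or overwrites a custom value.
CREATE PROCEDURE [procfwkHelpers].[SetDefaultProperties]
AS
BEGIN
    SET NOCOUNT ON;

    IF NOT EXISTS
    (
        SELECT 1
        FROM [procfwk].[Properties]
        WHERE [PropertyName] = 'TenantId' -- hypothetical property
    )
    BEGIN
        INSERT INTO [procfwk].[Properties]
            ([PropertyName], [PropertyValue])
        VALUES
            ('TenantId', '<your tenant id here>');
    END;
END;
```

This shape also suits the NUnit TearDown usage: a test can clear the tables down and then call the SetDefault procedures to restore a known metadata baseline.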
That concludes the release notes for this version of ADF.procfwk.
Please reach out if you have any questions or want help updating your implementation from the previous release.