Posidex's Customer MDM

Prime MDM lets you align all enterprise data from multiple sources and connect it together so you can operate efficiently and innovate quickly

Our cloud-native Customer MDM establishes a single, accurate, trusted source of master data, consolidating even the most complex business data into synchronized golden records for confident implementation and decision-making.

As enterprises grow, customers interact with the organization across more business channels than ever, leading to their data being stored in silos.

The challenge is to consolidate all interactions of a single customer into just one record, the customer's "golden record" or single version of truth, which is then available for consumption by all connected business systems.

An error-free customer master is of paramount importance for organizations to become agile enterprises, and entity resolution is fundamental to building one. The growing emphasis on regulatory compliance has made the creation and maintenance of accurate, complete master data a business necessity.

The Posidex Prime MDM Platform enables our clients to align enterprise-wide customer data from multiple sources and connect it together so that businesses can operate efficiently and innovate quickly. The 360-degree view of the customer it provides, establishing linkages and relationships across products and lines of business, generates great value for large and complex enterprises.

Our clients now have a game-changing ability to manage customer data in real time with great accuracy and integrity, thereby strengthening processes in Customer Onboarding, KYC Due Diligence, Operations, Marketing, Fraud Detection, Risk Management, Compliance and Customer Experience.

Your CUSTOMER'S identity is the foundation of all your business.

We ensure it is.

MDM Graphic

Highlights

  • Backed by Posidex's powerful, indigenous entity resolution engines, Prime360 & CLIP: mathematical models based on number theory, with in-memory analytics
  • Yields high recall and high precision, the key measures of an efficient entity resolution engine
  • Tested and validated on hundreds of millions of records; highly scalable
  • Highly configurable: flexible rules define the scope, tolerance and degree of matching
  • Blazing speed and high performance: processes millions of records in a few hours, with quick response times and high throughput
  • Lower hardware sizing compared to many other products
  • Low TCO and high ROI
  • Implemented and validated by industry giants across verticals, government and corporate alike
  • Database and OS agnostic

Posidex Prime MDM Platform

Posidex Prime MDM Platform Graphic

High Level Features

  • Ability to load data from a wide variety of formats, viz., flat files and databases (SQL, NoSQL)
  • Data profiling: drawing data insights and surfacing data discrepancies
  • Data cleaning and standardisation; data enrichment
  • Data matching for entity resolution and relationship discovery; Day-Zero data deduplication; incremental data matching
  • Rule configuration: Matching Rule Profiles (MRP), highly configurable to define the scope and tolerance of search; survivorship rule building
  • Grouping & Cluster formation
  • Comprehensive view of customer / Beneficiary across Schemes / departments / Accounts / Lines of businesses etc.
  • Graphical representation of relationship hierarchy
  • Network analytics - entity level, linking relationship, non-obvious linkages
  • Golden Record building & Updating
  • Data Stewardship - Case Management - Merging & Splitting of Clusters
  • Analytics - Segmentation- Ownership - Gap analysis
  • Reports - Data Governance reports - Custom reports
  • UI for User Access Management (UAM), Admin activities, Rule building etc
  • Integration with GraphDB - Graphical Visualisation
  • Big Data Ready solution and addresses High Availability and Horizontal scalability (HA&HS)
  • Platform independent and neutral to database
  • Platform supports file-based processing
  • Enables data-driven governance that would otherwise not be possible, given the typical differences and incomplete data sets across departments and lines of business in the absence of a unique ID spanning the data sets

Components


Data Loading and Data Integration

  • Ability to load data from a wide variety of sources, such as Excel, flat files, XML files, relational databases, JSON databases, HDFS (Hadoop Distributed File System), big data, and streaming data (JSON text format)
  • Ability to extend the metadata repository with customer-defined metadata attributes
  • Automated discovery and acquisition of metadata from data sources
  • UI for end-user to facilitate work with metadata
  • Facilities for carrying out custom transformations
  • Ability to split text fields based on delimiters, such as space and commas
  • Would provide extract, transform and load capabilities
  • Physical data model to logical data model mapping and rationalization
  • Simple transformations such as data-type conversions, string manipulations and simple calculations
  • Bulk data extraction and loading
  • Creation and maintenance of data models. Configurable, customizable and extensible, as well as upgradable
  • Connectivity and access data stored in relational DBMS engines (for example, Oracle, IBM DB2, MySQL, and Microsoft SQL Server)
  • Connectivity to message queues, including those provided by application integration middleware products (such as Oracle AQJMS) and standards-based architectures (such as Java Messaging Service)
  • Ability to move data in bulk between data repositories
  • Event-based acquisition (time-based or data-value-based)
  • Execution of data delivery based on event triggers
  • Execution of data delivery in a batch, scheduled mode
  • Domain values of certain attributes captured and masters created for those attributes
  • Support integration with different latency characteristics and styles (for example, real-time and batch)
  • Predefined and customizable approaches for implementing standard error-handling processes
  • Support to accept data for new insertion, updates, partial data augmentation
  • Tools and facilities for monitoring and controlling runtime processes

Data Profiling

  • Ability to carry out data profiling, data quality assessment, determine data anomalies and for metadata discovery
  • Range of prebuilt analyses on individual attributes/columns/fields, such as minimum, maximum, frequency distributions of values and patterns, and others
  • Determine the high frequency values, outliers, seemingly exceptional values
  • Identify the junk, exclude values and generate a list for cleaning
  • Ability to run business rules that check for specific quality issues
  • Packaged processes, including steps used to perform common quality tasks (for example, providing values for incomplete data, resolving conflicts of duplicate records, specifying custom rules for merging records, profiling, auditing and more)
  • Ability to perform parsing operations
  • User interface in which quality processes and issues are exposed to business users, stewards and others
  • Ability to present profiling results in a graphical manner (for example, various chart formats)
  • Ability to present profiling results in textual report format
  • Prebuilt graphical dashboards presenting profiling results (for example Junk values, Out of format PAN, Suspicious DOBs etc)
  • Scheduled execution of profiling processes (via built-in or third-party scheduling functionality)
  • Standard reports for exposing profiling results
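The per-column analyses listed above (min/max, null counts, frequency distributions of values and patterns) can be sketched in a few lines. The `profile_column` helper and the PAN sample data below are illustrative only, not part of the product:

```python
from collections import Counter
import re

def profile_column(values):
    """Compute simple profiling statistics for one attribute/column:
    null count, min/max, top values, and character-pattern frequencies."""
    non_null = [v for v in values if v not in (None, "", "NULL")]
    # Reduce each value to a shape pattern: digits -> 9, lower -> a, upper -> A
    patterns = Counter(
        re.sub(r"[A-Z]", "A", re.sub(r"[a-z]", "a", re.sub(r"\d", "9", v)))
        for v in non_null
    )
    freq = Counter(non_null)
    return {
        "count": len(values),
        "nulls": len(values) - len(non_null),
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
        "top_values": freq.most_common(3),
        "patterns": patterns.most_common(3),
    }

# Toy PAN-like column: the dominant pattern stands out, outliers are exposed
pans = ["ABCDE1234F", "XYZAB0001C", "9999999999", "", "ABCDE1234F"]
report = profile_column(pans)
```

Values whose pattern deviates from the dominant one (here, the all-digit entry) are the "out of format" candidates a profiling dashboard would flag.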

Data Cleansing and Standardization

  • Simple transformations, such as data-type conversions, string splitting and concatenation operations
  • Moderate-complexity transformations, such as look-up and replace operations
  • Higher-order transformations, such as sophisticated parsing operations
  • Prebuilt rules for common standardization and cleansing operations, such as formatting addresses, telephone numbers, and common identifiers like tax ID numbers
  • Facilities for developing custom transformations and extending packaged transformations
  • Merging fields to achieve completeness
  • Packaged functionality to address specific requirements of customer data quality issues, such as standardizing of names, addresses and telephone numbers, and merging of duplicate customer records
  • Ability to split text fields by matching character strings against packaged knowledge bases of terms, names and more
  • Facilities for adding to, or customizing terms in, packaged knowledge bases, and the ability to create new knowledge bases
  • Validate pincodes using Pincode Data
  • Validate Phone number/Mobile using the standard specification available
  • Regular monitoring and dictionary updates happen in the product and are passed on through releases
  • Extraction and enrichment of State, District/City, Taluk, Village and Pincode
  • Validation of standard identifiers with a specific pattern, like PAN, nullifying them if invalid
  • Date standardisation
  • City/District standardisation
  • Expansion of corporate entity acronyms to their full forms
  • Cleaning/standardisation of keywords like Public/Private Limited etc.
  • Cleaning of noise-contributing characters and unwanted special characters
  • Clean the exclude values identified from data profiling
  • Extraction/Enrichment can happen in real time as well as in batch mode
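A minimal sketch of a few of the standardisation steps above: noise-character removal, acronym expansion, and pattern-based identifier validation. The `ACRONYMS` dictionary and the function names are illustrative assumptions, not the product's actual dictionaries or APIs:

```python
import re

# Illustrative lookup; a real deployment would use maintained dictionaries
ACRONYMS = {"PVT": "PRIVATE", "LTD": "LIMITED", "CORP": "CORPORATION"}
PAN_PATTERN = re.compile(r"^[A-Z]{5}\d{4}[A-Z]$")  # 5 letters, 4 digits, 1 letter

def standardise_name(name):
    """Strip noise characters, uppercase, collapse spaces, expand acronyms."""
    name = re.sub(r"[^A-Za-z0-9 ]", " ", name).upper()
    tokens = [ACRONYMS.get(t, t) for t in name.split()]
    return " ".join(tokens)

def validate_pan(pan):
    """Return the normalised identifier, or None (nullified) if invalid."""
    pan = pan.strip().upper()
    return pan if PAN_PATTERN.match(pan) else None
```

For example, `standardise_name("Acme  Pvt. Ltd.")` yields a clean, expanded corporate name, and an identifier that fails the pattern check is nullified rather than left to pollute matching.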

Matching and Clustering

  • Matching is based on Posidex's proprietary algorithms CLIP (for bulk) and Prime360 (for real time), which convert strings to numbers and use mathematical algorithms to determine the extent of match between the compared attributes
  • Strong facilities, in batch and real-time mode, for cleansing, matching, identifying, linking and reconciling customer master data from different data sources to create and maintain the "golden record"
  • High Precision & High Recall
  • High Performance
  • The matching is done on all the combinations of attributes defined and thus would address data inadequacies and target high recall
  • Ability to classify and grade the matches into perfect / authentic / System / MPC or Probable / Suggestive / referral / LPC thus targeting high Precision
  • Would take care of data inconsistencies / non uniformity of attribute availability
  • Supports multi-threading
  • Simultaneous running of all matching rules
  • Clustering is the linking of records belonging to the same entity
  • The linking is done to the nth degree
  • Undirected weighted graph
  • Dual clustering is supported. Clusters are based on MPC. However, on manual verification, the clusters based on LPC will survive over MPC clusters
  • Ability to extend the clusters by relating those with user-determined properties
  • Network analysis
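The cluster-formation step described above, linking records of the same entity into groups that form the connected components of an undirected match graph, can be illustrated with a toy union-find sketch. The match rule and sample records here are invented for illustration; the product's actual matching uses the CLIP/Prime360 engines:

```python
from itertools import combinations

def connected_clusters(records, is_match):
    """Link record pairs that satisfy a match rule, then group records into
    clusters (connected components of the undirected match graph)."""
    parent = list(range(len(records)))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in combinations(range(len(records)), 2):
        if is_match(records[a], records[b]):
            parent[find(a)] = find(b)      # union the two components

    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy rule: two records match if name OR phone agrees exactly
recs = [
    {"name": "RAVI KUMAR", "phone": "9000000001"},
    {"name": "RAVI KUMAR", "phone": "9000000002"},
    {"name": "R KUMAR",    "phone": "9000000002"},
    {"name": "ASHA MEHTA", "phone": "9111111111"},
]
rule = lambda x, y: x["name"] == y["name"] or x["phone"] == y["phone"]
groups = sorted(connected_clusters(recs, rule))
```

Note how records 0 and 2 never match directly yet land in the same cluster through record 1: that transitive chaining is the "nth degree" linking.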

Data Stewardship and Case Management

  • Supports a "data steward" role, enabling stewards to manage customer data throughout its life cycle and provide data governance
  • UI for manual remediation, linking and delinking customer records with full auditability and survivability
  • Maker Checker facility
  • User Access Management & Role creation
  • Ability to customize the user interface and workflow of the resolution process

API and Integration Channels

  • Supports multi-mode integration
  • Web service interfaces built on an SOA environment
  • SOAP & REST services
  • File exchange through sftp
  • Table level integration
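As a hedged sketch of the REST integration channel, the payload for a real-time match lookup might be built as below. The field names (`mrpId`, `attributes`) and the MRP identifier are assumptions for illustration, not the platform's actual API contract:

```python
import json

def build_match_request(record, mrp_id):
    """Serialise a search request: pick a Matching Rule Profile (MRP)
    and pass the attributes to be matched. Field names are hypothetical."""
    return json.dumps({
        "mrpId": mrp_id,
        "attributes": {
            "name": record.get("name"),
            "dob": record.get("dob"),
            "mobile": record.get("mobile"),
        },
    })

payload = build_match_request(
    {"name": "RAVI KUMAR", "dob": "1980-01-15", "mobile": "9000000001"},
    mrp_id="MRP_ONBOARDING",
)
```

The same payload shape would be POSTed to the REST endpoint or wrapped in a SOAP envelope, depending on the integration channel chosen.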

Merging and Golden Record Generation

  • The golden record is the single source of truth derived from multiple source systems within the ecosystem
  • The golden record is cast based on the survivorship rules
  • The golden record is based on MPC clusters (Most Probable Clusters)
  • The golden record is recast as incremental data arrives
  • Hand-off file generation shares the golden record information with source systems
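Casting a golden record from an MPC cluster via survivorship rules can be sketched as follows. The rule names (`most_trusted_source`, `latest`), the source ranking, and the sample records are illustrative assumptions, not the product's actual rule vocabulary:

```python
# Lower rank = more dependable source (an assumed, illustrative ordering)
SOURCE_RANK = {"KYC": 0, "CRM": 1, "LEGACY": 2}

def cast_golden_record(cluster, rules):
    """Build one golden record from the member records of a cluster,
    choosing each attribute's surviving value per its survivorship rule."""
    golden = {}
    for attr, rule in rules.items():
        candidates = [r for r in cluster if r.get(attr)]  # non-null values only
        if not candidates:
            golden[attr] = None
        elif rule == "most_trusted_source":
            golden[attr] = min(candidates, key=lambda r: SOURCE_RANK[r["source"]])[attr]
        elif rule == "latest":  # newest timestamp survives over older
            golden[attr] = max(candidates, key=lambda r: r["updated"])[attr]
    return golden

cluster = [
    {"source": "LEGACY", "updated": "2019-05-01", "name": "R KUMAR",    "mobile": "9000000001"},
    {"source": "KYC",    "updated": "2021-02-10", "name": "RAVI KUMAR", "mobile": None},
    {"source": "CRM",    "updated": "2023-08-30", "name": "RAVI K",     "mobile": "9000000002"},
]
golden = cast_golden_record(cluster, {"name": "most_trusted_source", "mobile": "latest"})
```

When incremental data adds or changes a cluster member, re-running the cast recomputes the golden record, which is the "recast" behaviour noted above.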

Matching Rule Configuration and Survivorship rule building

  • UI for building matching rules
  • Provision for multiple Matching Rule Profiles (MRP) and option to choose one before submitting a request. MRP will constitute multiple rules with 'OR' relation
  • Matching Rules support AND/OR operations between the attributes
  • Provision to mark an attribute as optional: match if available, else treat as a match; match with NULL; accept NULL input
  • Multi-value parameters can be applied for criss-cross matching or to match specific types
  • Tolerance of matching is set for each attribute; even DOB, contact number and identifiers can be searched for an approximate match
  • The tolerance of matching for an attribute can differ from rule to rule
  • Ability to search on complete data or subset of data (Confinement)
  • The confinement can be done at rule level or at the MRP level to apply for all rules
  • The confinement settings can be made while we build the rule or defer it to apply at run time while the request is posted
  • Certain attributes can be set as residual attributes, which do not participate in matching but help in assessing the confidence of a match
  • Weightages can be assigned to attributes which would help in arriving at the match score
  • The results can be classified and labelled into different buckets based on business rules
  • The results can be graded for match quality. The grading is done for each class
  • The results can be ranked to display the best match on top; the lower the rank number, the better the match quality
  • Provision to maintain log of rule creation
  • UI for defining the Survivorship rules
  • An attribute can assume a value based on the survivorship rules: the source of the attribute, ageing (timestamp) with the latest surviving over the older, the longest value, max, min, average, etc.
  • Rules can assign preference to the most dependable sources
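The OR-between-rules / AND-between-attributes structure of an MRP, with a per-attribute, per-rule tolerance, can be sketched as below. The configuration shape and the toy tolerance comparator are illustrative assumptions, not the product's actual rule format:

```python
def within_tolerance(a, b, tol):
    """Toy comparator: exact match, or equal after dropping `tol` trailing
    characters. Stands in for the engine's real approximate matching."""
    if a is None or b is None:
        return False
    return a == b or (tol > 0 and a[:-tol] == b[:-tol])

MRP = [                             # OR relation between rules
    {"name": 0, "dob": 0},          # Rule 1: exact name AND exact DOB
    {"mobile": 0},                  # Rule 2: exact mobile number
    {"name": 2, "mobile": 0},       # Rule 3: approximate name AND exact mobile
]

def matches(rec_a, rec_b, mrp=MRP):
    """True if any rule in the profile is satisfied by the record pair."""
    return any(
        all(within_tolerance(rec_a.get(k), rec_b.get(k), t) for k, t in rule.items())
        for rule in mrp
    )

rec_a = {"name": "RAVI KUMAR", "dob": "1980-01-15", "mobile": "9000000001"}
rec_b = {"name": "RAVI KUMAL", "dob": "1981-01-15", "mobile": "9000000001"}
```

Here `rec_a` and `rec_b` fail the exact-name rule but still match via the mobile rule, showing how the same attribute can carry a different tolerance from rule to rule.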

Reports

  • MIS reports
  • Data governance reports
  • Data matching statistical reports

Deployment and Infrastructure

  • Ability to deploy the run-time via cloud-based infrastructure such as Amazon EC2 and Microsoft Azure
  • Hosted, off-premises software deployment (SaaS model)
  • Support for deployment in Linux environments
  • Support for deployment on IBM infrastructure
  • Support for deployment in Solaris
  • Support for deployment in Unix-based environments
  • Support for deployment in virtualized server environments
  • Support for deployment in Windows environments
  • Support for deployment in Wintel environments
  • Support for shared, virtualized implementations
  • Traditional on-premises (at customer's site) installation and deployment of software
  • Support for HA & HS

Built with

  • Prime360 V2.2 (Real Time Search and Matching Engine) with Relationship Discovery Module (for identifying obvious and non-obvious linkages between records)
  • CLIP V2.0 (Creation of Golden Records and Unique Customer Identification) with RCA