Data Aggregation &
De-Identification Policy
Table of Contents
- Introduction and Purpose
- Scope and Application
- Definitions
- Our Commitment to Privacy
- Types of Data Processing
- De-Identification Standards and Techniques
- Aggregation Methodologies
- Use Cases for Aggregated Data
- Governance and Oversight
- Technical and Organizational Safeguards
- Data Quality and Validation
- Transparency and Disclosure
- Third-Party Sharing and Commercial Use
- Rights and Limitations
- Compliance and Legal Framework
- Review and Updates
- Contact Information
1. Introduction and Purpose
1.1 Purpose of This Policy
This Data Aggregation and De-Identification Policy ("Policy") establishes the standards, processes, and safeguards MoneyMind Profile Pty Ltd ABN 33 672 152 073 ("MoneyMind Profile," "we," "us," or "our") employs when creating, using, and sharing aggregated and de-identified data derived from our Services.
1.2 Our Commitment
MoneyMind Profile is committed to:
- Privacy by Design: Building privacy protections into our data aggregation processes from the outset
- Data Minimization: Collecting and processing only the data necessary for legitimate purposes
- Transparency: Being clear about what data we aggregate, how we use it, and with whom we share it
- Strong De-Identification: Applying rigorous technical measures to prevent re-identification
- Continuous Improvement: Regularly reviewing and enhancing our practices to reflect evolving privacy standards
1.3 Strategic Value
Aggregated and de-identified data provides significant value to:
- Our Business: Enabling product improvements, research, and innovation
- The Industry: Contributing to a better understanding of financial behaviour and risk profiling
- Society: Advancing financial literacy and evidence-based policy development
- Our Customers: Providing benchmarking insights and industry comparisons

This value must be balanced against the fundamental right to privacy, which this Policy is designed to protect.
2. Scope and Application
2.1 What This Policy Covers
This Policy applies to all aggregated and de-identified data created from:
- End-User Data: Personal information about clients of Subscribing Organizations (financial advisers' clients) that is processed through our Services
- Customer Data: Information about our Subscribing Organizations and their authorized users
- Usage Data: Information about how the Services are used
- Generated Data: Outputs, analyses, and insights created through our Services
2.2 What This Policy Does NOT Cover
This Policy does not govern:
- Personal Information in Identifiable Form: Governed by our Privacy Policy and Data Processing Agreement
- Confidential Business Information: Governed by confidentiality agreements
- Internal Operations Data: Not intended for external use or commercialization
2.3 Relationship to Other Policies
This Policy should be read in conjunction with:
- MoneyMind Profile Privacy Policy
- Terms of Use (Section 12: Data Aggregation)
- Information Security Policy
In the event of any conflict, the Terms of Use and Data Processing Agreement shall prevail, except where this Policy imposes more stringent privacy protections.
2.4 Geographic Scope
This Policy applies to data aggregation activities in all jurisdictions where we operate:
- Australia: Complies with the Privacy Act 1988 (Cth) and the Australian Privacy Principles
- United Kingdom: Complies with the UK GDPR and the Data Protection Act 2018
- United States: Complies with the CCPA/CPRA and applicable state privacy laws
3. Definitions
"Aggregated Data" means data that has been combined from multiple sources or individuals and presented in summary form such that individual data subjects cannot be identified. Aggregated Data is considered non-personal information under applicable privacy laws.
"Anonymization" means the process of irreversibly transforming Personal Information such that individuals can no longer be identified, directly or indirectly, by any means reasonably likely to be used.
"Data Controller" (or "Business") means an entity that determines the purposes and means of processing Personal Information.
"Data Minimization" means the principle of collecting and processing only the Personal Information that is adequate, relevant, and limited to what is necessary for specified purposes.
"De-Identified Data" means data from which all direct identifiers have been removed and to which technical safeguards have been applied to prevent re-identification. De-identified data may include pseudonymized data.
"Direct Identifier" means any data element that directly identifies an individual, including but not limited to: full name, email address, phone number, physical address, Social Security number, driver's license number, account number, IP address (in some contexts), device identifiers linked to personal information.
"End-User" means a client of a Subscribing Organization whose Personal Information is processed through our Services.
"Indirect Identifier" (or "Quasi-Identifier") means data that, when combined with other data, could potentially identify an individual, such as: age, gender, occupation, geographic region, dates (birth, transaction, activity), demographic characteristics.
"Personal Information" (or "Personal Data") means any information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with an identified or identifiable natural person.
"Pseudonymization" means the processing of Personal Information such that it can no longer be attributed to a specific individual without the use of additional information (the "key"), which is kept separately and subject to technical and organizational measures to prevent re-identification.
"Re-Identification" means the process of matching de-identified or pseudonymized data back to the specific individual to whom it relates.
"Sensitive Personal Information" includes Personal Information revealing: racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health information, sex life or sexual orientation, and in some jurisdictions, financial account information, Social Security numbers, and precise geolocation.
"Services" means MoneyMind Profile software, applications, tools, and related services.
"Statistical Disclosure Control" means techniques applied to data to prevent the disclosure of information about individuals while preserving data utility.
"Subscribing Organization" means a financial advisory firm, wealth management company, or individual financial adviser that uses our Services to profile and serve their clients.
4. Our Commitment to Privacy
4.1 Privacy-First Approach
We design our data aggregation processes with privacy as a foundational principle, following the framework of "Privacy by Design and by Default":
Privacy by Design:
- Privacy protections are embedded into our systems and processes from the outset
- We anticipate privacy risks and build safeguards proactively
- Privacy is a core business requirement, not an afterthought

Privacy by Default:
- The strictest privacy settings apply automatically
- No action is required by individuals to protect their privacy
- Only necessary data is processed for each specific purpose
4.2 Data Minimization Principle
We adhere strictly to data minimization:
- We collect only the minimum data necessary for aggregation purposes
- We retain source data only as long as necessary for de-identification and aggregation
- We delete source Personal Information promptly after aggregation is complete
- We limit the granularity of aggregated data to what is necessary for its intended purpose
4.3 Purpose Limitation
Aggregated data is used only for the purposes disclosed in:
- This Policy
- Our Privacy Policy
- Our Terms of Use (Section 12)
- Specific disclosures to Subscribing Organizations
We do not use aggregated data for purposes incompatible with these disclosures without obtaining appropriate consent or providing additional notice.
4.4 Accountability and Responsibility
We take full responsibility for:
- Ensuring de-identification techniques are effective
- Preventing re-identification of individuals
- Maintaining the security of aggregation processes
- Training personnel on privacy requirements
- Conducting regular audits and assessments
5. Types of Data Processing
5.1 When We Act as Data Controller
We are the data controller (business) for aggregated data created from:
- Customer usage and activity data (how Subscribing Organizations use our Services)
- Aggregate industry research conducted with explicit participant consent
In these scenarios, we determine the purposes and means of aggregation and are responsible for compliance with applicable privacy laws.
5.2 When We Act as Data Processor
When Subscribing Organizations use our Services to profile their clients, we act as a data processor (service provider) for End-User Personal Information. In this role:
Limited Aggregation Rights:
- We may aggregate End-User data only as permitted by our Data Processing Agreement
- Subscribing Organizations retain primary control over their clients' data
- We may create aggregated insights only after applying rigorous de-identification
- Aggregation is conducted in accordance with Section 12 of our Terms of Use

Contractual Authorization: Our Terms of Use (Sections 12.1–12.3) provide contractual authorization from Subscribing Organizations to:
- Aggregate End-User data across multiple Subscribing Organizations
- Use de-identified aggregated data for service improvement, research, and product development
- Share aggregated data with third parties (subject to this Policy's protections)
5.3 Source Data Categories
We aggregate data from the following categories:
Behaviour Profile Data:
- Responses to behaviour profiling questionnaires
- Financial personality assessments
- Behavioural traits and tendencies
- Decision-making patterns

Risk Profile Data:
- Risk capacity assessments
- Risk tolerance scores
- Financial goals and risk need

Demographic Data:
- Age ranges
- Geographic regions (country, state, metropolitan area)
- Occupation categories
- Income bands
- Expense bands
- Asset ranges
- Liability ranges
- Life stage indicators

Financial Data:
- Portfolio asset allocation data (aggregated)
- Asset class preferences

Usage and Interaction Data:
- Feature utilization patterns
- Questionnaire completion rates
- Report generation frequency
- Time spent on assessments
- User interaction patterns
Important: We do NOT include in aggregated data:
- Full names, email addresses, or contact information
- Specific account numbers or financial account identifiers
- Social Security numbers or government identifiers
- Precise geolocation data
- Health information (unless appropriately de-identified under HIPAA standards for research)
- Any data that would allow individual identification
Within the Software, we do NOT capture Social Security numbers, tax file numbers, national insurance numbers, driver's license or passport numbers, financial account numbers (bank account, investment account, or credit card numbers), or vehicle identification numbers (VINs).
6. De-Identification Standards and Techniques
6.1 De-Identification Framework
We employ a multi-layered de-identification framework based on industry best practices from organizations like Apple, Salesforce, and leading privacy research institutions.
Our approach follows the "De-Identification Triangle" principle:
- Remove Direct Identifiers: Strip all data elements that directly identify individuals
- Generalize Indirect Identifiers: Transform quasi-identifiers to reduce specificity
6.2 Removal of Direct Identifiers
Before any aggregation, we remove all direct identifiers, including:
- Full names (first, middle, last)
- Email addresses
- Phone numbers (mobile, landline)
- Physical addresses (street, unit number)
- Account identifiers and user IDs
- IP addresses (full or partial)
- Device identifiers (MAC addresses, IMEI, advertising IDs)
- Biometric data (fingerprints, facial recognition data)
- Web cookies or persistent identifiers linked to personal information
- Any other unique identifiers that could directly identify individuals

Technical Implementation:
- Mandatory deletion before data enters aggregation processes
- Audit logs to verify complete removal
6.3 Generalization of Indirect Identifiers
Indirect identifiers (quasi-identifiers) are generalized to prevent re-identification through combination:
Age:
- Specific ages → Age ranges (e.g., 25-34, 35-44, 45-54)
- Ranges selected to ensure k-anonymity thresholds

Geographic Data:
- Specific addresses → Metropolitan area, state, or region
- Postal codes → Partial postal codes (first 3 digits) or geographic regions
- City-level data only for cities with population > 100,000
- Country-level aggregation as default for international data

Dates:
- Specific dates of birth → Month and year, or year only
- Transaction dates → Week, month, or quarter
- Temporal rounding to obscure precise timing

Income and Asset Data:
- Specific values → Broad bands (e.g., $50,000-$100,000, $100,000-$200,000)
- Band widths selected to ensure sufficient population in each band
- Top-coding applied to highest values to prevent identification of high-net-worth individuals

Financial Data:
- Portfolio asset allocation → Allocation or value ranges
- Specific product names → Product categories
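The banding rules above can be sketched in code. This is an illustrative sketch only: the band edges, labels, and top-coding cutoff are hypothetical examples, not MoneyMind Profile's production values.

```python
# Hypothetical band definitions for illustration; production values differ.
AGE_BANDS = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 120)]
INCOME_EDGES = [0, 50_000, 100_000, 200_000]  # top edge is open-ended (top-coded)

def generalize_age(age: int) -> str:
    """Map a specific age to a coarse range such as '25-34'."""
    for lo, hi in AGE_BANDS:
        if lo <= age <= hi:
            return f"{lo}+" if hi >= 120 else f"{lo}-{hi}"
    return "unknown"

def generalize_income(income: float) -> str:
    """Map a specific income to a broad band; top-code values above the last edge."""
    for lo, hi in zip(INCOME_EDGES, INCOME_EDGES[1:]):
        if lo <= income < hi:
            return f"${lo:,}-${hi:,}"
    # Top-coding: all values above the highest edge collapse into one open band,
    # preventing identification of high-net-worth individuals.
    return f"${INCOME_EDGES[-1]:,}+"
```

The same record-level transform is applied to every record before aggregation, so only the coarse band ever reaches an aggregated dataset.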
6.4 Advanced De-Identification Techniques
The following techniques may be applied depending on the use case and the data sets being aggregated.
Noise Addition (Differential Privacy):
- Random statistical noise added to datasets to prevent inference attacks
- Calibrated to maintain data utility while protecting privacy
- Ensures that the inclusion or exclusion of any single individual does not significantly affect results

Data Suppression:
- Rare or unique combinations of attributes are suppressed entirely
- Outliers removed to prevent identification of exceptional cases

Data Swapping:
- Values of sensitive attributes swapped between records to break linkages
- Applied selectively to maintain overall statistical properties

Rounding and Binning:
- Continuous variables rounded to reduce precision
- Values grouped into bins or categories
- Applied consistently across all records
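To illustrate how suppression and noise addition combine, the sketch below drops cells smaller than a minimum size and then perturbs the surviving counts with Laplace noise (the standard differential-privacy mechanism for counts). The threshold and privacy budget shown are hypothetical, not MoneyMind Profile's actual parameters.

```python
import random

K_THRESHOLD = 10   # hypothetical minimum cell size before a cell is suppressed
EPSILON = 1.0      # hypothetical privacy budget; smaller values mean more noise
SENSITIVITY = 1    # a count changes by at most 1 when one individual is added/removed

def noisy_count(true_count: int) -> int:
    """Perturb a count with Laplace(scale = sensitivity / epsilon) noise.

    The difference of two exponential draws with rate epsilon/sensitivity
    is Laplace-distributed with the required scale.
    """
    rate = EPSILON / SENSITIVITY
    noise = random.expovariate(rate) - random.expovariate(rate)
    return max(0, round(true_count + noise))

def release_cells(cells: dict[str, int]) -> dict[str, int]:
    """Suppress rare cells entirely, then add noise to the survivors."""
    return {name: noisy_count(n) for name, n in cells.items() if n >= K_THRESHOLD}
```

Suppression removes the cells most at risk of identifying an individual outright; the noise then prevents inference attacks against the cells that remain.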
6.5 Pseudonymization vs. Anonymization
Pseudonymization:
- Used when we need to track entities over time without identifying them
- Random identifiers replace direct identifiers
- Keys kept separately under strict security controls
- Pseudonymized data is still considered Personal Information under GDPR/UK GDPR
- Used only for intermediate processing; final aggregated outputs are fully anonymized

Anonymization:
- Irreversible process; no key exists to re-identify individuals
- Aggregated data in its final form is anonymized
- No longer considered Personal Information under applicable laws
- Cannot be reversed using any reasonable means
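The distinction can be made concrete with a small sketch. Keyed hashing (HMAC-SHA256) is one common pseudonymization approach, shown here as an illustration rather than MoneyMind Profile's actual implementation: the same input always maps to the same pseudonym, supporting longitudinal tracking during intermediate processing, but without the separately stored key the mapping cannot be reproduced or reversed.

```python
import hashlib
import hmac

def pseudonymize(user_id: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).

    The key must be stored separately under strict access controls; under
    GDPR/UK GDPR the output is still Personal Information while the key exists.
    Final aggregated outputs drop the pseudonym entirely (anonymization).
    """
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical key, held in a separate secure store in practice.
key = b"example-key-held-separately"
p1 = pseudonymize("end-user-0042", key)
assert p1 == pseudonymize("end-user-0042", key)   # stable: supports cohort tracking
assert p1 != pseudonymize("end-user-0043", key)   # distinct users stay distinct
```

Destroying the key after the final aggregation step is one way the pseudonymized intermediate data becomes irreversibly anonymized.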
7. Aggregation Methodologies
7.1 Aggregation Process Overview
Our aggregation process follows a rigorous multi-stage workflow:
Stage 1: Data Extraction
- Source data identified based on aggregation purpose
- Minimal necessary data extracted from production systems
- Extraction logged and audited

Stage 2: Pre-Processing
- Direct identifiers removed immediately upon extraction
- Data quality checks performed
- Missing or invalid data flagged

Stage 3: De-Identification
- Techniques from Section 6 applied systematically
- Automated and manual checks for residual identifiers

Stage 4: Aggregation
- Statistical aggregation performed (counts, means, medians, distributions)
- Thresholds applied (see Section 8)

Stage 5: Validation
- Re-identification risk assessment
- Data utility verified
- Compliance checks performed

Stage 6: Approval and Release
- Privacy Officer, Chief Technology Officer, or designated reviewer approves
- Aggregated data moved to approved repository
- Source Personal Information deleted (unless required for legal retention)
7.2 Statistical Aggregation Methods
Descriptive Statistics:
- Counts and frequencies
- Means, medians, and modes
- Standard deviations and variances
- Percentiles and quartiles
- Confidence intervals

Trend Analysis:
- Time-series aggregates (monthly, quarterly, annually)

Benchmarking:
- Industry averages and percentiles
- Peer group comparisons
- Normalized scores and indices

Segmentation:
- Cluster analysis to identify groups with similar characteristics
- Segment profiles based on aggregated attributes

Modeling and Inference:
- Models using aggregated data
- Predictive models trained on de-identified datasets
- Model outputs validated to ensure no individual identification
7.3 Geographic Aggregation
- Regions/States/Postal/ZIP Codes: Default geographic unit for most purposes
- Countries: Used for international comparisons
7.4 Cohort Analysis
We may create cohorts (groups) for analysis:
- Cohorts defined by shared characteristics (e.g., "self-control or optimism behavioral characteristics")
- Tracking at cohort level only; no individual tracking
- Cohort definitions broad enough to prevent identification
8. Use Cases for Aggregated Data
8.1 Internal Business Uses
Product Development and Improvement:
- Identifying features that are most/least used
- Understanding user needs and pain points
- Developing new tools, features, and functionalities
- Optimizing user experience and workflows

Research and Innovation:
- Conducting research into financial behavior and decision-making
- Developing improved risk profiling
- Creating industry insights and thought leadership
- Contributing to academic and practitioner research

Quality Assurance and Performance Monitoring:
- Monitoring system performance and reliability
- Identifying and resolving technical issues
- Benchmarking service delivery
- Ensuring compliance with service level agreements

Business Analytics and Reporting:
- Internal reporting on product usage and engagement
- Board and investor reporting
- Financial planning and forecasting
- Strategic decision-making
8.2 Customer-Facing Uses
Benchmarking and Insights:
- Providing Subscribing Organizations with industry benchmarks
- Showing how their clients compare to peer groups
- Delivering insights to improve business practices
- Offering context for individual client profiles and cohorts

Training and Education:
- Educating financial advisers on behavior profiling best practices
- Providing case studies and examples (fully anonymized)
- Supporting professional development

Marketing and Thought Leadership:
- Publishing industry reports and white papers
- Presenting at conferences and webinars
- Demonstrating product value through aggregated results
- Building brand authority
8.3 Third-Party Commercial Uses
Subject to the safeguards in Section 13, we may commercialize aggregated data through:
Licensing to Research Institutions:
- Academic researchers studying financial behaviour
- Industry research organizations
- Think tanks and policy institutes

Licensing to Financial Services Firms:
- Financial advisory providers
- Investment managers seeking market insights
- Financial product providers

Licensing to Data Analytics Companies:
- Firms providing market intelligence
- Business intelligence platforms
- Consulting firms advising financial services clients

Media and Publishers:
- Providing data for news articles and industry publications
- Supporting journalism and public discourse
- Educational and non-profit uses
Important: All third-party licenses include contractual prohibitions on:
- Attempting to re-identify individuals
- Combining data with other sources to identify individuals
- Using data in ways inconsistent with this Policy
- Further distribution without approval
8.4 Prohibited Uses
We do NOT use aggregated data for:
- Discriminatory Purposes: Making decisions that discriminate based on protected characteristics
- Targeting Individuals: Creating profiles of or marketing to specific individuals
- Employment Decisions: Hiring, firing, or promoting based on aggregated data
- Surveillance: Monitoring or tracking specific individuals
- Harmful Purposes: Any purpose that could harm individuals or groups
9. Governance and Oversight
9.1 Privacy Officer Responsibility
Our Privacy Officer has overall responsibility for:
- Implementing and maintaining this Policy
- Approving aggregation projects and methodologies
- Conducting or overseeing re-identification risk assessments
- Reviewing third-party data sharing agreements
- Investigating privacy incidents related to aggregated data
- Reporting to senior management on compliance
9.2 Data Aggregation Review Committee
For significant aggregation projects, we may convene a Data Aggregation Review Committee comprising:
- Privacy Officer (Chair)
- Chief Technology Officer or delegate
- Data Scientist or Analytics Lead
- Legal Counsel or Compliance Officer

Committee Responsibilities:
- Reviewing proposed aggregation projects
- Assessing privacy risks and benefits
- Approving methodologies and techniques
- Monitoring ongoing aggregation activities
- Recommending policy updates
9.3 Approval Process
Standard Aggregation (Internal Use, Low Risk):
- Documented by Data Analyst or Data Scientist
- Reviewed and approved by Chief Technology Officer and Privacy Officer
- Quarterly reporting to management

Enhanced Aggregation (Commercial Use, Third-Party Sharing):
- Formal proposal submitted to Data Aggregation Review Committee
- Risk assessment conducted and documented
- Committee approval required before proceeding
- Annual review of ongoing commercial uses

High-Risk or Sensitive Data:
- Senior management approval required
- External privacy expert consultation may be required
- Enhanced monitoring and auditing
9.4 Documentation Requirements
All aggregation activities must be documented, including:
- Purpose: Why the aggregation is being conducted
- Data Sources: What Personal Information is being aggregated
- Methodology: De-identification and aggregation techniques used
- Risk Assessment: Re-identification risk evaluation
- Approval: Evidence of appropriate approval
- Safeguards: Technical and organizational measures applied
- Retention: How long source data and aggregated data will be retained
- Distribution: How aggregated data will be used or shared

Documentation is retained for audit and compliance purposes for at least 7 years.
9.5 Training and Awareness
Personnel involved in data aggregation must complete:
- Privacy Awareness Training: Annual training on privacy principles and legal requirements
- De-Identification Training: Specific training on de-identification techniques
- Role-Based Training: Specialized training for data scientists, analysts, and engineers
- Policy Training: Training on this Policy and related procedures

Training records are maintained and reviewed annually.
10. Technical and Organizational Safeguards
10.1 Access Controls
Principle of Least Privilege:
- Access to source Personal Information limited to personnel with legitimate need
- Role-based access controls enforced
- Regular access reviews and revocations

Segregation of Duties:
- Personnel conducting aggregation do not have direct access to identifiable data in production systems
- Extraction, de-identification, and approval performed by different individuals
- Peer review required for complex aggregations

Audit Logging:
- All aggregation activities logged and monitored
- Logs reviewed regularly for anomalies
- Logs retained for at least 3 years
10.2 Secure Environments
Encryption:
- Data encrypted in transit (TLS 1.2+) and at rest
- Encryption keys managed according to industry best practices
- Regular key rotation

Secure Data Disposal:
- Source Personal Information securely deleted after aggregation (subject to legal retention)
- Multiple overwrite passes to prevent recovery
- Certificate of destruction for physical media
10.3 Technical Safeguards in Software
Automated De-Identification:
- Software tools to automatically detect and remove direct identifiers
- Algorithms to apply generalization, suppression, and noise addition

Query Controls:
- Query logging and monitoring for suspicious patterns

Data Masking:
- Dynamic data masking for development and testing environments
10.4 Network and Infrastructure Security
Network Security:
- Firewalls protecting aggregation systems
- Intrusion detection and prevention systems

Monitoring:
- 24/7 security monitoring
- Security information and event management (SIEM)

Vulnerability Management:
- Security scanning and penetration testing
- Secure software development lifecycle
11. Data Quality and Validation
11.1 Data Quality Principles
High-quality aggregated data requires high-quality source data:
- Accuracy: Source data verified and validated before aggregation
- Completeness: Missing data handled appropriately (imputation, exclusion, or flagging)
- Consistency: Data definitions and formats standardized across sources
- Timeliness: Data aggregated from current, relevant time periods
11.2 Validation Procedures
Pre-Aggregation Validation:
- Data integrity checks
- Identification and handling of outliers
- Detection of duplicate records
- Reconciliation with source systems

Post-Aggregation Validation:
- Comparison to historical trends and benchmarks
- Cross-validation with alternative data sources (where appropriate)

Privacy Validation:
- Automated scans for residual identifiers
- Re-identification risk assessment
- Compliance with minimum thresholds
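An automated scan for residual identifiers can be as simple as a battery of pattern matchers run over free-text fields before release. The patterns below are hypothetical illustrations; a production scanner would cover far more identifier types and formats.

```python
import re

# Hypothetical patterns for illustration only; not an exhaustive scanner.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "au_phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
    "nine_digit_id": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # TFN/SSN-shaped
}

def scan_for_identifiers(text: str) -> list[str]:
    """Return the names of any identifier patterns found in a free-text field."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

Any non-empty result would block release and route the dataset back to the de-identification stage for review.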
11.3 Handling Data Quality Issues
Missing Data:
- Exclusion of incomplete records (with documentation)
- Imputation using statistical methods (if appropriate and disclosed)
- Separate reporting of missing data

Outliers:
- Statistical methods to identify outliers
- Removal or capping of extreme values to prevent identification
- Documentation of outlier handling

Errors:
- Correction at source where possible
- Exclusion of erroneous data
- Documentation of data quality limitations
11.4 Documentation of Limitations
Aggregated data products include documentation of:
- Data sources and collection methods
- Time periods
- Known limitations
- Exclusions and filters applied
12. Transparency and Disclosure
12.1 Transparency Commitments
We are committed to transparency about our data aggregation practices:
Public Disclosure:
- This Policy is publicly available on our website
- Privacy Policy includes a clear statement about aggregation practices
- Terms of Use disclose aggregation to Subscribing Organizations

Subscribing Organization Disclosure:
- Terms of Use clearly inform Subscribing Organizations of our aggregation practices
- Privacy Policy includes a clear statement about aggregation practices
- Data Processing Agreement specifies permitted aggregation activities

End-User Notice:
- Subscribing Organizations are responsible for notifying their clients via MoneyMind Profile's inbuilt Privacy Policy feature
- End-Users can contact their financial adviser with questions about data use
12.2 What We Disclose
In Our Privacy Policy:
- General description of aggregation practices
- Types of data aggregated
- Purposes for which aggregated data is used
- Categories of third parties who receive aggregated data
- Confirmation that aggregated data is de-identified and does not identify individuals

To Subscribing Organizations:
- Description of aggregation methodologies in this Policy
- De-identification standards and techniques applied
- Examples of aggregated data uses
- List of third-party recipients (if applicable)
- How aggregated data benefits the Services and industry

To Third-Party Recipients:
- Contractual prohibitions on re-identification attempts
- Description of permitted and prohibited uses
- Technical specifications and data limitations
- Attribution requirements (if applicable)
12.3 Limitations of Disclosure
We do NOT disclose:
- Proprietary algorithms and methodologies (trade secrets)
- Specific technical implementation details that could enable re-identification
- Details that would compromise the security of our systems
- Information that would reveal individual Subscribing Organizations' data or business practices
12.4 Requests for Information
Subscribing Organizations may request:
- Additional information about how their data contributes to aggregated datasets
- Examples of aggregated data products
- Confirmation of compliance with this Policy
- Audit rights as specified in the Master License Agreement

End-Users should direct questions to their financial adviser or Subscribing Organization, as we process their data only on behalf of those organizations.
Regulators may request information about our aggregation practices in accordance with applicable law.
13. Third-Party Sharing and Commercial Use
13.1 Categories of Third-Party Recipients
We may share Aggregated Data with the following categories of third parties:
Research Institutions:
- Universities and academic researchers studying financial behavior
- Think tanks and policy research organizations
- Industry research firms (e.g., market research companies)

Financial Services Firms:
- Investment managers seeking behavioral insights and market trends
- Financial product manufacturers (fund managers)
- Banking and wealth management institutions
- Financial technology companies

Consulting and Advisory Firms:
- Management consultants advising financial services clients
- Strategy and analytics firms
- Technology consultants and systems integrators

Media and Publishers:
- Financial news organizations and journalists
- Industry publications and trade media
- Authors and content creators (for educational purposes)

Government and Regulatory Bodies:
- Where required by law or regulation
- For policy development and research purposes
- In anonymized, non-confidential form
13.2 Licensing and Commercial Arrangements
Types of Arrangements:
- One-time data licenses for specific research projects
- Subscription-based access to aggregated datasets
- Custom aggregation services for specific client needs
- Co-development partnerships for research initiatives

Pricing:
- Aggregated data may be licensed to third parties for fair market value
- Pricing reflects the value of insights, not the underlying Personal Data
- Revenue generated supports ongoing development and improvement of the Services

Contractual Protections: All third-party recipients must agree to:
- Use Aggregated Data only for specified purposes
- Not attempt to re-identify individuals, Subscribing Organizations, or End-Users
- Not combine Aggregated Data with other data sources to identify individuals
- Implement appropriate security measures to protect Aggregated Data
- Not redistribute Aggregated Data without MoneyMind Profile's consent
- Acknowledge MoneyMind Profile as the source (where appropriate)
- Indemnify MoneyMind Profile for any unauthorized use or re-identification attempts
13.3 Prohibited Third-Party Uses
Third parties receiving Aggregated Data are contractually prohibited from:
Re-Identification:
- Attempting to identify specific individuals, Subscribing Organizations, or End-Users
- Combining Aggregated Data with other datasets to reverse the de-identification
- Using statistical inference or matching techniques to identify specific entities

Harmful Uses:
- Using Aggregated Data to discriminate against individuals or groups
- Making credit, employment, insurance, or housing decisions based on Aggregated Data
- Surveillance or monitoring of specific individuals
- Any use that would harm individuals or violate their rights

Unauthorized Distribution:
- Selling, licensing, or distributing Aggregated Data to others without consent
- Publishing Aggregated Data in forms that could enable re-identification
- Providing access to unauthorized parties

Competitive Use:
- Using Aggregated Data to develop competing products or services
- Reverse engineering MoneyMind Profile's methodologies or algorithms
- Benchmarking for competitive purposes without consent
12.4 Due Diligence and Vetting
Before sharing Aggregated Data with third parties, we conduct due diligence:
Reputation Review:
-
Assessment of recipient's reputation and data protection practices
-
Review of past data handling incidents or violations
-
Verification of legitimate business purpose
Legal and Compliance:
-
Review of recipient's privacy policies and terms
-
Verification of compliance with applicable data protection laws
-
Confirmation of adequate security measures
Contractual Framework:
-
Execution of data license agreement or terms of use
-
Incorporation of use restrictions and prohibitions
-
Establishment of audit and termination rights
Ongoing Monitoring:
-
Periodic review of third-party compliance
-
Investigation of any suspected misuse
-
Termination of access for violations
12.5 Attribution and Citation
When We Require Attribution:
-
Publication of research findings or reports based on Aggregated Data
-
Use of Aggregated Data in public presentations or media
-
Incorporation of insights into third-party products or services
Attribution Format:
"Data provided by MoneyMind Profile. Analysis and conclusions are the author's own and do not represent the views of MoneyMind Profile."
​
When Attribution is Waived:
-
Internal business use not visible to external parties
-
Background research and analysis
-
Competitive intelligence (if permitted under license)
​
13. RIGHTS AND LIMITATIONS
13.1 Individual Rights Regarding Aggregated Data
Important: Once data is properly de-identified and aggregated, it is no longer "Personal Information" under most privacy laws. Individuals therefore have limited rights regarding Aggregated Data.
Before Aggregation (Personal Information): End-Users have full rights under applicable privacy laws (GDPR, CCPA, Privacy Act 1988), including:
-
Right to access their Personal Information
-
Right to correct inaccurate information
-
Right to delete their Personal Information (subject to exceptions)
-
Right to restrict or object to processing
-
Right to data portability
After Aggregation (De-Identified Data): Once data is aggregated and de-identified:
-
It is no longer attributable to specific individuals
-
Individual rights under privacy laws generally do not apply
-
Opt-out or deletion requests cannot "un-aggregate" data already incorporated into Aggregated Data
-
Individuals cannot access or correct Aggregated Data (as it does not identify them)
13.2 Opt-Out Limitations
Prospective Opt-Out:
-
End-Users may request (through their Subscribing Organization) that their future data not be included in aggregation
-
This request will be honored going forward but cannot retroactively remove already-aggregated data
-
Opt-out may limit the functionality or value of the Services to the End-User
No Retrospective Opt-Out:
-
Once Personal Information has been de-identified and aggregated, it cannot be "un-aggregated" or removed
-
Aggregated Data does not contain identifiers linking back to specific individuals
-
Deletion requests apply only to identifiable Personal Information, not Aggregated Data
13.3 Access and Correction Rights
Personal Information (Before Aggregation): End-Users may request access to and correction of their Personal Information through their Subscribing Organization.
Aggregated Data (After Aggregation):
-
End-Users cannot access Aggregated Data about "themselves" because Aggregated Data does not identify specific individuals
-
Aggregated Data reflects statistical patterns across many individuals
-
Correction of one individual's Personal Information would have negligible impact on Aggregated Data
13.4 Deletion Rights and Limitations
Personal Information: End-Users may request deletion of their Personal Information through their Subscribing Organization; the Subscribing Organization is responsible for actioning that deletion.
​
Aggregated Data: Deletion does NOT apply to Aggregated Data because:
-
Aggregated Data does not identify specific individuals
-
Removing one individual's contribution would not materially change the aggregate
-
De-aggregation is technically infeasible
-
Aggregated Data is owned by MoneyMind Profile
​
Example:
1,000 End-Users contribute risk tolerance data, and the Aggregated Data shows that "65% of users aged 45-54 have moderate risk tolerance." If one End-User is then deleted, their identifiable Personal Information is erased, but the aggregated statistic remains (now based on 999 users and still approximately 65%). There is no material change to the Aggregated Data, and there is no feasible way to identify and remove that one person's contribution.
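The arithmetic behind this example can be sketched in a few lines (illustrative only; the simulated cohort and the pure-Python approach are assumptions for demonstration, not our production pipeline):

```python
import random

random.seed(7)

# Simulate 1,000 End-Users aged 45-54: True = moderate risk tolerance.
# Exactly 650 (65%) are moderate, matching the example above.
users = [True] * 650 + [False] * 350
random.shuffle(users)

def moderate_share(cohort):
    """Aggregate statistic: percentage of the cohort with moderate risk tolerance."""
    return round(100 * sum(cohort) / len(cohort), 1)

before = moderate_share(users)      # 65.0 exactly (650 of 1,000)
after = moderate_share(users[1:])   # one user deleted: 649 or 650 of 999,
                                    # which rounds to 65.0 or 65.1

# Deleting a single contribution moves the aggregate by at most about
# 0.1 percentage points, so the published statistic is materially unchanged.
print(before, after)
```

The point of the sketch is that the aggregate carries no link back to the deleted individual: their record is gone, yet the statistic is effectively the same.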
13.5 Objection and Restriction Rights
GDPR Right to Object: Under GDPR Article 21, individuals may object to processing based on legitimate interests. However:
-
Aggregation is conducted after de-identification, when data is no longer Personal Data
-
MoneyMind Profile's legitimate interest in aggregation is balanced against privacy through rigorous de-identification
-
Objection rights apply to Personal Information processing, not to use of already-de-identified Aggregated Data
Right to Restriction: Individuals may request restriction of processing of Personal Information, but this does not apply to Aggregated Data that no longer identifies them.
​
13.6 How to Exercise Rights
For End-Users: Contact your financial advisor or the Subscribing Organization that collected your Personal Information; they are responsible for facilitating your privacy rights.
For Subscribing Organizations: Contact info@moneymindprofile.com to request information about aggregation practices.
14. COMPLIANCE AND LEGAL FRAMEWORK
14.1 Applicable Laws and Regulations
This Policy is designed to comply with data protection and privacy laws in all jurisdictions where we operate:
Australia:
-
Privacy Act 1988 (Cth)
-
Australian Privacy Principles (APPs), particularly APP 6 (Use and Disclosure)
-
Notifiable Data Breaches scheme
-
Guidance from the Office of the Australian Information Commissioner (OAIC)
United Kingdom:
-
UK General Data Protection Regulation (UK GDPR)
-
Data Protection Act 2018
-
Guidance from the Information Commissioner's Office (ICO) on anonymisation
-
ICO Anonymisation Code of Practice
European Union:
-
General Data Protection Regulation (GDPR) 2016/679
-
Article 29 Working Party Opinion 05/2014 on Anonymisation Techniques
-
European Data Protection Board (EDPB) guidance
United States:
-
California Consumer Privacy Act (CCPA) as amended by CPRA
-
Other state privacy laws (Virginia VCDPA, Colorado CPA, Connecticut CTDPA, Utah UCPA)
-
Federal Trade Commission (FTC) guidance on de-identification
-
NIST Special Publication 800-188 on de-identification
Financial Services Regulations:
-
Australian Securities and Investments Commission (ASIC) requirements
-
Financial Conduct Authority (FCA) data protection requirements (UK)
-
SEC and FINRA guidance on data handling (US)
14.2 De-Identification Standards
Our de-identification practices meet or exceed standards established by:
International Standards:
-
ISO/IEC 29100:2011 (Privacy framework)
-
ISO/IEC 29101:2018 (Privacy architecture framework)
-
ISO/IEC 20889:2018 (Privacy enhancing data de-identification techniques)
Regulatory Guidance:
-
ICO Anonymisation Code of Practice (UK)
-
Article 29 Working Party Opinion on Anonymisation (EU)
-
OAIC Guide to De-identification (Australia)
-
NIST SP 800-188 (US)
-
HIPAA Safe Harbor and Expert Determination standards (US healthcare context)
Academic and Industry Standards:
-
k-anonymity, l-diversity, t-closeness (peer-reviewed research)
-
Differential privacy (gold standard for privacy-preserving data analysis)
-
Best practices from organizations like IAPP, Future of Privacy Forum
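To illustrate the first of these techniques, a minimal k-anonymity check over quasi-identifiers might look like the following sketch (the records and column names are hypothetical, not MoneyMind Profile's production code):

```python
from collections import Counter

# Hypothetical de-identified records: quasi-identifiers only
# (age band, state), with direct identifiers already stripped.
records = [
    {"age_band": "45-54", "state": "NSW"},
    {"age_band": "45-54", "state": "NSW"},
    {"age_band": "45-54", "state": "NSW"},
    {"age_band": "35-44", "state": "VIC"},
    {"age_band": "35-44", "state": "VIC"},
    {"age_band": "35-44", "state": "VIC"},
]

def satisfies_k_anonymity(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values is shared by
    at least k rows, i.e. each record is indistinguishable from at least
    k-1 others on those attributes."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values()) >= k

print(satisfies_k_anonymity(records, ["age_band", "state"], k=3))  # True
```

Here each (age band, state) combination appears three times, so the dataset is 3-anonymous but not 4-anonymous; in practice a dataset failing the check would be further generalized or suppressed before release.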
14.3 When Data is Considered "De-Identified"
Under applicable laws, data is considered de-identified when:
GDPR/UK GDPR (Recital 26): Data is anonymous if it has been rendered anonymous in such a manner that "the data subject is not or no longer identifiable", taking into account "all the means reasonably likely to be used" to identify the individual.
CCPA Section 1798.140(h): Data is de-identified if:
-
It cannot reasonably identify, relate to, describe, be capable of being associated with, or be linked to a particular consumer
-
The business has implemented technical safeguards to prohibit re-identification
-
The business has implemented business processes to prohibit re-identification
-
The business does not attempt to re-identify the information
Privacy Act 1988 (Australia): Information is de-identified if it is no longer about an identified individual or an individual who is reasonably identifiable.
Our Standard: We apply stringent requirements across all jurisdictions to ensure global compliance.
14.4 Risk-Based Approach
We apply a risk-based approach to de-identification, considering:
Risk Factors:
-
Sensitivity of underlying Personal Information
-
Availability of external data that could enable re-identification
-
Sophistication of potential attackers
-
Consequences of re-identification
Risk Mitigation:
-
Higher-risk data receives stronger de-identification (higher k-anonymity, more noise, greater suppression)
-
Ongoing monitoring of re-identification risks as new data sources emerge
-
Periodic re-assessment of de-identification effectiveness
14.5 Compliance Monitoring
Internal Audits:
-
Annual review of aggregation processes and data products
-
Testing of de-identification effectiveness
-
Verification of compliance
-
Assessment of new re-identification risks
Third-Party Audits:
-
Periodic review by privacy and data protection experts
-
Penetration testing
-
Certification against relevant standards (ISO, SOC 2)
Regulatory Engagement:
-
Cooperation with data protection authorities (OAIC, ICO, DPAs)
-
Response to regulatory inquiries and investigations
-
Implementation of recommendations from regulators
14.6 Accountability and Documentation
We maintain comprehensive records of:
-
Aggregation projects and methodologies
-
De-identification techniques applied
-
Risk assessments and mitigation measures
-
Third-party sharing agreements
-
Complaints and incidents
-
Changes to this Policy and practices
Records are retained for at least 7 years and are available to regulators upon request.
15. REVIEW AND UPDATES
15.1 Policy Review Schedule
This Policy is reviewed and updated:
Annually: Comprehensive review by the Privacy Officer and Data Governance Committee of aggregation metrics and compliance
As Needed: In response to any of the following:
-
Changes in applicable laws or regulations
-
Regulatory guidance or enforcement actions
-
New re-identification techniques or risks
-
Incidents or complaints
-
Significant changes to aggregation practices
-
Introduction of new data types or uses
15.3 Notification of Changes
Material Changes: When we make material changes to this Policy (e.g., new uses of Aggregated Data, changes to de-identification standards, new third-party sharing arrangements):
(a) We will update this Policy on our website with:
-
Revised "Effective Date"
-
Revised "Last Reviewed" date
(b) We will notify Subscribing Organizations via:
-
Email to primary account contact (at least 30 days before effective date)
-
In-Platform notification upon administrator login
-
Notice in monthly newsletter or product updates
(c) Subscribing Organizations are responsible for:
-
Reviewing changes and assessing impact on their obligations
-
Updating their own privacy policies if necessary
-
Notifying End-Users if required by applicable law
Non-Material Changes: Minor updates (clarifications, corrections, formatting) may be made without notice, provided they do not substantively change Subscribing Organizations' or End-Users' rights.
​
15.4 Feedback and Complaints
We welcome feedback on this Policy and our aggregation practices.
To provide feedback or raise concerns: Email: info@moneymindprofile.com
Subject line: "Data Aggregation Policy Feedback"
We commit to:
-
Acknowledging receipt within 5 Business Days
-
Investigating all complaints thoroughly
-
Responding substantively within 30 days
-
Implementing improvements where appropriate
Complaint Process:
-
Initial Contact: Email info@moneymindprofile.com with details of your concern
-
Acknowledgment: We confirm receipt
-
Investigation: Privacy Officer reviews the complaint and gathers information
-
Response: We provide a written response with findings and any actions taken
-
Escalation: If not satisfied, you may escalate to:
-
Australia: Office of the Australian Information Commissioner (www.oaic.gov.au)
-
UK: Information Commissioner's Office (www.ico.org.uk)
-
US: State Attorney General or FTC (www.ftc.gov)
15.5 Continuous Improvement
We are committed to continuously improving our aggregation and de-identification practices by:
Monitoring Best Practices:
-
Tracking developments in privacy-enhancing technologies
-
Participating in industry forums and working groups
-
Engaging with academic researchers and privacy experts
Technology Investments:
-
Implementing new de-identification techniques as they emerge
-
Upgrading systems to support stronger privacy protections
-
Automating compliance and validation processes
Training and Awareness:
-
Regular training for data scientists and analysts on privacy requirements
-
Updates on new re-identification risks and countermeasures
-
Fostering a culture of privacy and data protection
16. CONTACT INFORMATION
For Questions About This Policy:
Attn: Privacy Officer - Data Aggregation
MoneyMind Profile Pty Ltd
Email: info@moneymindprofile.com
​
APPENDIX: GLOSSARY OF KEY TERMS
(Comprehensive definitions are provided in Section 3; this glossary offers quick reference.)
-
Aggregated Data: De-identified data combined from multiple sources, not identifying individuals
-
Controller/Business: Entity determining purposes and means of data processing
-
Data Subject/Consumer: Individual whose Personal Data is processed
-
De-Identified Data: Data stripped of direct identifiers and protected against re-identification
-
Direct Identifier: Data element directly identifying an individual (name, email, SSN, etc.)
-
End-User: Client of a Subscribing Organization whose data may be processed
-
Indirect Identifier/Quasi-Identifier: Data that combined with other data could identify someone (age, ZIP code, etc.)
-
k-Anonymity: Each record is indistinguishable from at least k-1 others
-
Personal Information/Personal Data: Information identifying or identifiable to an individual
-
Processor/Service Provider: Entity processing data on behalf of a Controller
-
Pseudonymization: Replacing identifiers with artificial identifiers (reversible with key)
-
Re-Identification: Matching de-identified data back to specific individuals
-
Sensitive Personal Information: Data revealing race, health, religion, sexual orientation, etc.
-
Statistical Disclosure Control: Techniques preventing disclosure about individuals in aggregated data
-
Subscribing Organization: Financial advisory firm licensing MoneyMind Profile Services
​
​Document Version: 1.0
Effective Date: 30 January 2026
​
