Responsible and Ethical AI Policy

Effective Date: 2025-12-30

Last Updated: 2025-12-30

Purpose:

Ashton Advisors LLC is committed to using Artificial Intelligence (AI) responsibly, ethically, and lawfully in ways that advance creativity, innovation, and insight without compromising human judgment, privacy, or fairness. This policy establishes the governance, principles, and operational requirements for all AI use within Ashton Advisors LLC projects.

Scope:

This policy applies to all employees, contractors, and collaborators using AI tools, including but not limited to OpenAI ChatGPT Business, analysis assistants, and other AI-powered platforms used in research, synthesis, or content development.

Core Principles:

  • Lawfulness and Compliance – All AI activities comply with applicable laws and regulations, including GDPR, UK GDPR, and CCPA/CPRA, as well as emerging AI regulations and frameworks such as the EU AI Act, NIST AI RMF, and ISO/IEC 23894.

  • Accountability – Human oversight is mandatory. Project Leads remain responsible for all deliverables assisted by AI.

  • Transparency – Clients are informed whenever AI tools contribute to analysis or deliverable creation.

  • Fairness and Non-Discrimination – Ashton Advisors monitors and mitigates bias in AI outputs through peer review, prompt audits, and training.

  • Privacy and Security – Only anonymized or de-identified data may be processed through AI tools. No PII, health records, or confidential client data may be entered.

  • Explainability – All AI-assisted outputs must be traceable to source materials and accompanied by supporting rationale.

  • Human-Centric Design – AI is used to augment human creativity, not replace professional judgment or decision-making.

  • Continuous Improvement – AI governance practices are reviewed quarterly and updated annually to reflect regulatory and technological changes.


Governance Structure:

  • Responsible AI Lead – Oversees implementation, audits, and staff training.

  • Project Leads – Ensure compliance within their projects and validate all AI outputs.

  • Founders – Review and approve annual updates to this policy.

Governance activities include:

  • Quarterly AI Governance Reviews (model updates, bias audits, incident log review).

  • Annual Responsible AI Training for all employees and contractors.

  • Maintenance of a Prompt Governance Log documenting templates, model versions, and QA outcomes.

Operational Safeguards:

  • Prompt Design Controls: Use of the Prompt Playbook to standardize structure and minimize bias.

  • Human Review: Two-person peer review of all AI-assisted outputs.

  • Bias and Fairness Checks: Quarterly sample reviews; findings recorded in the Prompt Governance Log.

  • Privacy Controls: Only anonymized text entered into AI tools; PII and sensitive data prohibited.

  • Security Controls: Use only SOC 2 Type II or ISO/IEC 27001-certified vendors (OpenAI ChatGPT Business, Google Workspace).

  • Incident Response: Immediate containment, documentation, and remediation of any detected AI- or data-related risk; incidents are reported to team@ashtonstrategies.com.

Compliance and Monitoring:

Ashton Advisors LLC conducts quarterly audits of AI use and data-handling practices. Non-compliance may result in disciplinary action and notification to affected clients where required.


All incidents, changes, and audit outcomes are logged and reviewed by the Responsible AI Lead and Founders.

Review and Updates:

This policy is reviewed annually or upon significant regulatory change. The latest version is available to all staff and provided to enterprise clients upon request.

Contact:

For questions regarding this policy or Responsible AI governance, contact:
Responsible AI Lead – Ashton Advisors LLC
Email: team@ashtonstrategies.com