F.E.A.S.T. AI Policy

F.E.A.S.T. Policy On the Use of Artificial Intelligence

This AI policy outlines our approach to integrating artificial intelligence (AI) technologies into F.E.A.S.T. services ethically and responsibly. It provides guidance to align any future AI initiatives with our mission, values, and commitment to compassionate support of our community.  

Status: F.E.A.S.T. currently uses AI tools for internal operations only. No caregiver-facing AI applications are in use or planned, and any future uses of AI will be governed by the following policy.
 

I. Introduction and Purpose

F.E.A.S.T. is a global community of caregivers dedicated to supporting families and loved ones through the journey of eating disorders. As AI tools become standard in organizational operations, this policy establishes clear boundaries and safeguards for how F.E.A.S.T. uses these technologies.

Our position is straightforward: AI is a tool that may improve operational efficiency, but it will never replace the human connections at the heart of F.E.A.S.T.’s mission. This policy ensures AI use remains consistent with our values of trust, transparency, compassion, and community empowerment.
 

II. Current Scope of AI Use

A. What F.E.A.S.T. May Use AI For

F.E.A.S.T. may use AI tools exclusively for internal operations, including:

  • Drafting and editing internal and external communications (emails, social media posts, website copy)
  • Summarizing documents or meeting notes
  • Generating initial outlines or ideas for educational content
  • Administrative tasks like data organization or formatting
  • Translation drafts (preliminary only; see Section II.C)
     

Critical requirement: Any AI-generated content must always be reviewed, edited, and approved by F.E.A.S.T. staff or volunteers before use. AI outputs are starting points, never final products; there is always human oversight.

B. What F.E.A.S.T. Does NOT Use AI For

F.E.A.S.T. prohibits AI use for:

  • Direct support, counseling, or communication with caregivers or families
  • Automated chatbots or crisis response systems
  • Peer support facilitation or moderation
  • Medical, nutritional, or clinical advice of any kind
  • Automated decision-making about service eligibility, program access, or resource allocation
  • Replacing human-to-human connections in any F.E.A.S.T. program
  • Any situation where a caregiver might believe they are interacting with a human when they are not
     

F.E.A.S.T. will not deploy caregiver-facing AI applications. Our community members will always interact with real people.

C. Translation and Multilingual Content

F.E.A.S.T. relies on qualified human translators for published translations of programs like F.E.A.S.T. 30 Days and Family Guides.

AI translation tools may be used to:

  • Generate initial translation drafts to accelerate the translation process
  • Provide rough translations for internal reference
     

III. Ethical Principles and Safeguards

A. Core Commitments

Human Oversight: Every use of AI at F.E.A.S.T. includes human review. Staff and volunteers are responsible for the accuracy, appropriateness, and quality of all content, regardless of how it was created.

Transparency: F.E.A.S.T. will never misrepresent AI-generated content as fully human-created when transparency matters. For example, F.E.A.S.T. will not publish AI-written blog posts attributed to anonymous or human-sounding bylines.

Privacy Protection: F.E.A.S.T. prioritizes the privacy of our community members. We do not input sensitive case details or confidential community data into AI systems. When AI tools are used for administrative purposes, information is anonymized or generalized.

Alignment with Values: AI use must always support—never undermine—F.E.A.S.T.’s mission to empower caregivers through compassionate human connection, peer support, and evidence-based education.

B. Data Protection and Vendor Requirements

F.E.A.S.T. requires that any AI tools or platforms used by staff, contractors, or vendors must:

  • Not use F.E.A.S.T. inputs for training AI models. This means using enterprise or professional tiers of services (such as ChatGPT Team/Enterprise, Claude with training opt-outs enabled, or tools like NotebookLM) that allow organizations to prohibit training on customer data.
  • Comply with applicable privacy laws and F.E.A.S.T.’s Privacy Policy
  • Provide reasonable security measures to protect any data processed
     

F.E.A.S.T. will not input sensitive personal information about community members (names, specific situations, health details, etc.) into AI systems.

C. Preventing Harm

Given F.E.A.S.T.’s focus on eating disorders, we recognize specific risks:

Misinformation Risk: AI can generate plausible but inaccurate information about eating disorders, treatment, or recovery. All factual content must be verified by knowledgeable humans before publication.

Bias Risk: AI may reflect societal biases about body size, gender, race, or other characteristics. Content must be reviewed for bias and edited to reflect F.E.A.S.T.’s inclusive values.

Tone Risk: AI often lacks the nuance required for sensitive topics. All content shared with caregivers must reflect F.E.A.S.T.’s empathetic, supportive tone and be reviewed for appropriateness.

Trust Risk: If AI use erodes trust in F.E.A.S.T.’s human-centered approach, we will reduce or eliminate that use.
 

IV. Accountability and Quality Control

A. Responsibility

The person using AI tools is responsible for the final output. “AI generated it” is not an acceptable excuse for inaccurate, inappropriate, or harmful content.

B. Error Correction

If AI-assisted content contains errors or causes harm:

  • F.E.A.S.T. will promptly correct the error and notify affected parties if necessary
  • F.E.A.S.T. will review what went wrong and adjust processes to prevent recurrence
  • Repeated issues or concerns regarding any particular AI tool may result in discontinuing its use

C. Feedback

F.E.A.S.T. community members who have concerns about AI use may contact F.E.A.S.T. to provide feedback. All concerns will be reviewed and addressed.
 

V. Looking Forward

A. Potential Future Uses

F.E.A.S.T. may explore additional internal AI applications such as:

  • Analyzing aggregated, anonymized data to understand community needs
  • Improving website search functionality
  • Generating alternative versions of content for accessibility (e.g., plain language summaries)
      

Any new AI application will be evaluated against this policy before implementation.

B. What F.E.A.S.T. Will Continue to Avoid

F.E.A.S.T. has no plans to:

  • Replace human peer support with AI
  • Deploy chatbots or automated response systems
  • Use AI for clinical assessment or advice
  • Automate decisions about service delivery
      

Our commitment: The human expertise, lived experience, and compassionate presence of F.E.A.S.T. volunteers and staff will remain central to everything we do.
 

VI. Policy Review

This policy will be reviewed annually or when F.E.A.S.T.’s use of AI changes significantly. Updates will reflect lessons learned, technological developments, and community feedback.
