AI SAFETY & LIMITATION POLICIES
AI USE, SAFETY & LIMITATION DISCLAIMER
Applies To: Neurolens, all AI-generated outputs, preliminary assessments, screening insights.
Issued by: SAHCHI HEARING AND SPEECH SOLUTIONS PRIVATE LIMITED (“Gabify”)
1. PURPOSE OF THIS DISCLAIMER
This section explains:
- What AI in Neurolens can do
- What AI cannot do
- Limitations and risks
- User responsibilities
- Boundaries of liability
This disclosure is required under applicable AI ethics guidelines and medical-technology compliance frameworks.
2. NEUROLENS IS NOT A DIAGNOSTIC TOOL
Neurolens:
- Does not diagnose Autism, ADHD, or any developmental condition
- Does not replace clinical evaluation, observation, or standardized assessments
- Should not be used as the sole basis for intervention, therapy, or certification decisions
- Provides preliminary insights only, intended for professional interpretation
All AI outputs require human review and validation.
3. LIMITATIONS OF AI-BASED INSIGHTS
Users understand and agree:
3.1 AI May Produce Inaccurate or Incomplete Results
- AI may misinterpret behaviors
- AI outputs depend on quality of input data
- Environmental noise may affect video/audio interpretation
3.2 AI Cannot Understand Nuance Like a Clinician
- Cultural variations
- Contextual cues
- Emotional and social subtleties
- Family and environmental factors
3.3 AI Cannot Replace Standardized Tools
Such as:
- CARS
- M-CHAT
- ADOS
- Clinical neurological evaluation
AI is supportive, not definitive.
4. NO AUTOMATED DECISION MAKING
Neurolens does not make independent determinations.
It is designed to:
- Support clinician thought processes
- Highlight observable patterns
- Reduce documentation workload
- Improve workflow consistency
Clinicians retain full decision-making authority.
5. AI IS PROBABILISTIC, NOT CERTAIN
AI outputs may include:
- Suggested observations
- Pattern indications
- Behavioral markers
- Probabilistic insights
These do not equate to clinical conclusions.
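As a hedged illustration of this principle only (the structure and field names below are hypothetical, not Neurolens's actual data model), a probabilistic insight can be represented so that it cannot appear in a report until a clinician has reviewed it:

```python
# Illustrative sketch only. The schema and field names are assumptions for this
# example; they show how a probabilistic output stays a suggestion until a
# clinician reviews it, not how Neurolens actually stores outputs.
from dataclasses import dataclass

@dataclass
class ScreeningInsight:
    pattern: str                   # an observable pattern, e.g. "limited joint attention" (not a diagnosis)
    probability: float             # model-estimated likelihood, between 0 and 1
    clinician_reviewed: bool = False
    clinician_note: str = ""

    def is_reportable(self) -> bool:
        """An insight may be reported only after clinician review."""
        return self.clinician_reviewed

insight = ScreeningInsight(pattern="limited joint attention during play", probability=0.72)
assert not insight.is_reportable()  # a probabilistic output alone is never a clinical conclusion
```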
6. NO LEGAL, MEDICAL, OR CERTIFICATION USE
Neurolens outputs must not be used as:
- Medical certificates
- Disability certificates
- Legal evidence
- The basis for school placement decisions made without professional evaluation
- The basis for therapy continuation or discontinuation decisions
Gabify is not responsible for misuse.
7. USER AGREES TO ASSUME RESPONSIBILITY
By using Neurolens, the user acknowledges:
- AI is fallible
- Human oversight is mandatory
- AI outputs must be contextualized clinically
- Gabify does not guarantee accuracy
8. LIMITATION OF LIABILITY
Gabify is not liable for:
- Misinterpretation of AI insights
- Clinical decisions made by users
- Use of AI outputs outside intended scope
- Incorrect assessments due to faulty input data
- Improper use by unqualified persons
9. CHANGES TO AI MODELS
Gabify may:
- Update algorithms
- Improve accuracy
- Retrain models
- Add new screening capabilities
without prior notice, as part of continuous improvement.
CLINICAL RESPONSIBILITY & SUPERVISION POLICY
Applies to ALL clinical users (SLP, psychologist, therapist, paediatrician, special educator).
1. PURPOSE
This policy clarifies:
- Clinical responsibilities
- Supervision requirements
- Appropriate use of Neurolens
- Legal expectations from professionals
It ensures clinical safety and compliance.
2. PROFESSIONAL SUPERVISION MANDATE
Neurolens must be used only under the supervision of a qualified professional, such as:
- Speech-language pathologist
- Psychologist
- Paediatrician
- Behaviour therapist
- Special educator
The supervisor is responsible for:
- Evaluating AI outputs
- Making final decisions
- Providing clinical context
- Ensuring ethical practices
3. RESPONSIBILITIES OF THE CLINICIAN
Clinicians must:
3.1 Review AI Outputs Carefully
Confirm, modify, or reject AI-generated insights.
3.2 Conduct Independent Observations
Neurolens is only one component of the evaluation.
3.3 Obtain Proper Consent
Including parental consent for child assessments.
3.4 Use Professional Judgment
Clinical reasoning supersedes machine interpretation.
3.5 Ensure Ethical Use of AI
Especially in cases involving:
- Vulnerable children
- Non-verbal individuals
- Neurodiverse populations
- Sensitive family situations
4. SCOPE OF PRACTICE LIMITATIONS
Neurolens may not be used to:
- Diagnose Autism
- Diagnose ADHD
- Provide medical approval
- Classify severity levels
- Certify disability
- Recommend medication
These require professional evaluation.
5. RESPONSIBILITY OF INSTITUTIONS
Institutions must ensure:
- Only licensed professionals use the platform
- Users receive training
- Staff follow ethical and clinical protocols
- Access controls are implemented
- Consent obligations are fulfilled
6. NON-CLINICAL STAFF RESTRICTIONS
Administrative staff may access:
- Scheduling tools
- Usage dashboards
- Billing interfaces
They may not access:
- Clinical reports
- Child audio/video
- AI screening outputs
- Assessment histories
unless explicitly authorized.
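As an illustration of how such restrictions can be enforced in software (a hedged sketch only; the role names, resource labels, and explicit-authorization override below are assumptions, not Neurolens's actual access-control implementation):

```python
# Illustrative role-based access sketch. Role and resource names are assumptions
# for this example, not Neurolens's actual configuration.
ROLE_PERMISSIONS = {
    "clinician": {"scheduling", "clinical_reports", "child_media", "ai_outputs", "assessment_history"},
    "admin_staff": {"scheduling", "usage_dashboards", "billing"},  # no clinical data by default
}

def can_access(role: str, resource: str, explicit_grants: frozenset = frozenset()) -> bool:
    """Deny by default; allow only role permissions or explicitly authorized resources."""
    return resource in ROLE_PERMISSIONS.get(role, set()) or resource in explicit_grants

assert can_access("admin_staff", "billing")                                # permitted by role
assert not can_access("admin_staff", "ai_outputs")                         # blocked by default
assert can_access("admin_staff", "ai_outputs", frozenset({"ai_outputs"}))  # explicit authorization
```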
7. PROHIBITION OF UNSUPERVISED USE
Neurolens must not be used by:
- Parents
- Students
- Volunteers
- Untrained staff
- Anyone not clinically certified
Gabify may terminate access for misuse.
8. LIABILITY & DISCLAIMER
Clinicians acknowledge:
- Final decisions rest with them
- AI does not replace diagnostic frameworks
- Gabify is not liable for clinical errors
ALGORITHMIC FAIRNESS, BIAS & QUALITY POLICY
Applies to AI models used in Neurolens for screening, interpretation, and insight generation.
1. PURPOSE
This policy outlines how Gabify ensures:
- Ethical use of AI
- Mitigation of bias
- Transparent functioning
- Model performance monitoring
- Fair treatment of diverse populations
2. COMMITMENT TO FAIRNESS
Gabify commits to:
- Avoid discrimination
- Prevent bias against any demographic
- Support inclusive datasets
- Evaluate edge cases
- Maintain ethical standards
Neurolens AI models are trained using:
- Diverse samples
- Multiple age groups
- Multilingual speech patterns
- Varied socio-cultural backgrounds
3. IDENTIFIED SOURCES OF BIAS
Potential bias may arise from:
- Under-representation of certain populations
- Variations in recording environment
- Cultural behavioral differences
- Socio-linguistic variation
Gabify continually refines models to address these.
4. MEASURES TO MITIGATE BIAS
Gabify implements:
4.1 Human-in-the-Loop (HITL)
Clinicians always interpret results.
4.2 Diverse Data Training
Models are trained only on diverse, de-identified data collected with consent.
4.3 Model Evaluation Metrics
- Accuracy
- Precision
- Recall
- False positives/negatives
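For reference, these are the conventional definitions used when evaluating a binary screening model. The sketch below is illustrative only; the counts are placeholders, not Gabify's evaluation code or actual results:

```python
# Conventional screening-metric formulas computed from confusion-matrix counts.
# Illustrative only; the example counts are placeholders, not real results.
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,                        # overall agreement with ground truth
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,    # flagged cases that were truly positive
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,       # true positives the model caught
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

print(screening_metrics(tp=40, fp=5, fn=10, tn=45))  # placeholder counts
```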
4.4 Regular Audits
- Bias tests
- Output consistency checks
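One common form of bias test, shown here as a hedged sketch (the group labels, record format, and disparity threshold are hypothetical, not Gabify's actual audit criteria), compares how often the model flags cases across demographic groups:

```python
# Illustrative demographic-parity-style check: compare flag rates across groups.
# Record format, group labels, and the 0.10 threshold are assumptions for this sketch.
from collections import defaultdict

def flag_rates(records):
    """records: iterable of {"group": str, "flagged": bool}; returns flag rate per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        flagged[r["group"]] += int(r["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_alert(rates, threshold=0.10):
    """Flag for human review if the gap between highest and lowest group rates exceeds the threshold."""
    return (max(rates.values()) - min(rates.values())) > threshold
```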
4.5 Clear User Disclaimers
AI limitations are disclosed prominently.
5. TRANSPARENCY OF AI BEHAVIOR
Gabify ensures that:
- AI suggestions are explainable
- Users understand why certain patterns were flagged
- Reports indicate uncertainty when relevant
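As a hedged illustration of how uncertainty might be surfaced in a report (the thresholds and wording below are hypothetical, not Neurolens's actual presentation logic):

```python
# Illustrative mapping from a model confidence score to hedged report wording.
# Thresholds and labels are hypothetical, not Neurolens's actual logic.
def uncertainty_label(confidence: float) -> str:
    if confidence >= 0.85:
        return "Pattern observed with relatively high model confidence; clinician confirmation still required."
    if confidence >= 0.60:
        return "Possible pattern; moderate model confidence, interpret with caution."
    return "Low model confidence; treat as an open question, not an observation."
```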
6. CONTINUOUS MODEL IMPROVEMENT
Neurolens AI models evolve through:
- Performance monitoring
- Feedback from clinicians
- De-identified consented data
- Error analysis
- Feature updates
7. USER RESPONSIBILITY TO REPORT ERRORS
Clinicians must report:
- Incorrect insights
- Bias indicators
- Misclassifications
- Harmful outputs
to info@gabify.life
Gabify investigates all such reports.
8. NO USE FOR AUTOMATED PROFILING
Neurolens will not:
- Automatically classify children
- Categorize them into risk labels without human oversight
- Make irreversible recommendations
AI outputs are always suggestive, not determinative.