AI & Data Practices
This document explains, in plain language, how Lumira uses artificial intelligence, what data is sent to AI systems, how we protect that data, and what safeguards we have in place. This document supplements our Privacy Policy and Terms of Service and is intended to provide transparency about our AI system in accordance with the EU AI Act, GDPR, CCPA/CPRA, India's Digital Personal Data Protection Act 2023 (DPDPA), and Australia's Privacy Act 1988.
LUMIRA IS NOT A MEDICAL DEVICE. LUMIRA IS NOT A SUBSTITUTE FOR PROFESSIONAL MEDICAL ADVICE, DIAGNOSIS, OR TREATMENT. AI-GENERATED CONTENT IS FOR INFORMATIONAL AND EDUCATIONAL PURPOSES ONLY.
All AI outputs should be verified with a qualified healthcare professional before acting on them. In case of a medical emergency, call 911 (US), 999 (UK), 112 (EU/India), 000 (Australia), or your local emergency number immediately.
1. How AI Works in Lumira
AI System Transparency (EU AI Act Compliance)
Lumira uses Claude, a large language model (LLM) developed by Anthropic, Inc. (San Francisco, California). Lumira is classified as a limited-risk AI system under the EU AI Act, as it interacts directly with users and generates content that users may rely upon for parenting decisions. In accordance with Article 50 of the EU AI Act (Regulation (EU) 2024/1689), we provide the following transparency disclosures.
1.1 What the AI does
Lumira's AI performs the following functions:
- Conversational responses. When you submit a check-in or ask a parenting question, the AI generates a personalised, supportive response based on your baby's developmental stage and the context you provide.
- Weekly developmental guides. The AI generates age-appropriate weekly guides covering what to expect at your baby's current stage.
- Concern summaries. When you express a concern during a check-in, the AI generates a structured summary with context, potential explanations, and recommendations (always including a recommendation to consult a healthcare professional).
- Pattern observations. Rule-based logic (not AI) analyses your check-in history to identify trends in sleep, feeding, mood, and energy. The AI may reference these patterns in its responses.
1.2 What the AI does not do
- The AI does not diagnose medical conditions.
- The AI does not prescribe or recommend medications.
- The AI does not make automated decisions with legal or similarly significant effects on you.
- The AI does not replace the judgement of a qualified healthcare professional.
- The AI does not have access to real-time medical databases, clinical trial results, or electronic health records.
- The AI cannot physically examine, observe, or assess your child.
1.3 Human oversight (EU AI Act)
In accordance with the EU AI Act's requirements for human oversight of AI systems:
- All AI system prompts are designed, reviewed, and maintained by Lumira's team.
- The red flag scanner (Section 3) operates as a pre-AI safety layer with human-defined rules.
- Users can report concerning AI responses at any time, and reports are reviewed by humans.
- Users can opt out of AI processing entirely in Settings, maintaining full access to non-AI features.
- Medical disclaimers are hardcoded into the application interface and cannot be overridden by AI output.
2. AI Configuration & Anti-Hallucination Measures
We configure Lumira's AI with safety and accuracy as primary objectives:
2.1 Temperature setting
Lumira uses a temperature setting of 0.4 (on a scale of 0.0 to 1.0) for AI content generation. A lower temperature produces more consistent, predictable responses with less creative variation. We chose 0.4 as a balance between consistency and natural conversational tone. This does not guarantee accuracy — temperature controls randomness in word selection, not factual correctness.
2.2 Anti-hallucination measures
“Hallucination” refers to AI generating plausible-sounding but factually incorrect information. Lumira employs several measures to reduce hallucination risk:
- Structured system prompts. The AI operates under detailed system instructions that constrain its responses to the domain of parenting support and require it to acknowledge uncertainty when appropriate.
- Explicit uncertainty language. The AI is instructed to use hedging language (“this might be,” “some parents find,” “consider asking your doctor about”) rather than making definitive claims about medical or developmental topics.
- No citation of specific studies. The AI is instructed not to cite specific research studies, journal articles, or statistics, as it cannot verify citation accuracy in real time.
- Scope boundaries. The AI is instructed to decline requests outside its scope (e.g., legal advice, financial advice, relationship counselling beyond the parenting context).
- Consistent medical disclaimers. Medical disclaimer text is rendered by the application, not generated by the AI, ensuring it cannot be omitted or altered by AI output.
Important: Despite these measures, AI hallucination cannot be fully eliminated with current technology. AI-generated content may still contain inaccuracies. Always verify information with a qualified professional.
3. Red Flag Scanner
The red flag scanner is a supplementary safety layer, not a diagnostic tool. It may fail to identify serious or emergency medical conditions. It does not replace the judgement of a qualified healthcare professional.
Before your message reaches the AI model, it passes through a red flag scanner — a keyword-based safety system that runs entirely within Lumira's infrastructure (not sent to Anthropic). The scanner covers twelve (12) emergency categories (including breathing emergencies, choking, seizures, high fever in newborns, unresponsiveness, severe bleeding, head injuries, ingestion/poisoning, severe allergic reactions, reduced fetal movement, preterm labour signs, and suicidal ideation) and assigns one of four escalation levels: emergency, urgent, call_doctor, or monitor. The scanner operates as follows:
- Keyword matching. The scanner checks your message against a curated list of keywords and phrases associated with potentially urgent symptoms (e.g., “not breathing,” “high fever,” “seizure,” “blue lips”).
- Immediate safety response. If a red flag is detected, Lumira displays an immediate, prominent safety message directing you to contact emergency services or your healthcare provider, before the AI generates its response.
- Not a filter. The red flag scanner does not prevent the AI from responding. It adds a safety layer on top of the normal AI response flow.
- Rule-based, not AI. The scanner uses deterministic keyword matching, not AI. It is not subject to hallucination but is limited to the keywords in its database.
The red flag scanner keyword list is maintained by Lumira's team and is updated periodically. It is designed to be over-inclusive (false positives are preferred over false negatives).
4. What Data Is Sent to the AI
When Lumira generates an AI response, the following contextual data is included in the prompt sent to Anthropic's Claude API:
| Data sent | Purpose |
|---|---|
| Baby's developmental stage (e.g., “12 weeks old”) | Generate age-appropriate guidance |
| Current check-in data (mood, energy, sleep, feeding notes) | Personalise the response to your current situation |
| Your message or concern text | Respond to your specific question or concern |
| Recent pattern summary (e.g., “sleep has decreased over 5 days”) | Provide contextual observations |
| Pregnancy status and trimester (if applicable) | Generate pregnancy-stage-appropriate content |
| Baby's name (first name only, e.g., “Meera”) | Personalise responses (use baby's name instead of “your baby”) |
| Parent's first name | Personalise the conversational tone |
| Weekly summary of trends (if available) | Provide additional context for more informed responses |
| Recent conversation history (current session) | Maintain conversational continuity within the session |
4.1 What is NOT sent to the AI
| Data not sent | Reason |
|---|---|
| Email address | Not needed for AI response generation |
| Physical address or location | Not collected by Lumira |
| IP address (raw or hashed) | Not relevant to AI processing |
| Account identifiers (user ID, session ID) | Not needed for AI response generation |
| Journal entries | Personal and private; never processed by AI |
| Phone number | Not needed for AI response generation |
| Partner or co-parent data | Not relevant to individual AI interaction |
| Consent records or audit logs | Administrative data; not relevant to AI processing |
| Payment information | Not collected by Lumira directly |
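The data minimisation rule reflected in the two tables above can be thought of as an allowlist: only the named contextual fields are ever assembled into an AI prompt, and everything else is dropped. The sketch below illustrates the idea; the field names are illustrative and Lumira's actual schema may differ.

```python
# Illustrative allowlist mirroring the "What data is sent" table.
# Field names are examples, not Lumira's actual schema.
ALLOWED_CONTEXT_FIELDS = {
    "developmental_stage", "current_checkin", "message_text",
    "pattern_summary", "pregnancy_status", "baby_first_name",
    "parent_first_name", "weekly_summary", "session_history",
}

def build_ai_context(profile: dict) -> dict:
    """Copy only allowlisted fields into the prompt context.

    Anything not on the allowlist (email, IP address, user ID,
    journal entries, payment data, etc.) is silently dropped.
    """
    return {k: v for k, v in profile.items() if k in ALLOWED_CONTEXT_FIELDS}
```

An allowlist is safer than a blocklist here: a newly added field is excluded from AI prompts by default until it is deliberately added to the list.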
5. Emotional Signal Detection
Disclaimer
Emotional signal detection is an approximate, keyword-based system. It is not a mental health screening tool, diagnostic instrument, or clinical assessment. It cannot detect clinical depression, anxiety disorders, or other mental health conditions. If you are experiencing mental health difficulties, please contact a qualified mental health professional or a crisis helpline.
Lumira includes a basic emotional signal detection system that analyses keywords in your check-in messages to infer approximate emotional states (e.g., “tired,” “struggling,” “overwhelmed,” “happy,” “grateful”). This system:
- Uses keyword matching, not AI-based sentiment analysis.
- Is used to tailor the tone of AI responses (e.g., offering more empathetic language when distress signals are detected).
- May surface wellbeing resources (e.g., postpartum support helplines) when sustained distress patterns are detected over multiple check-ins.
- Does not store emotional state data separately from your check-in records.
- Does not share emotional state data with third parties.
- Can be disabled in Settings → Privacy & Data.
Limitations. This system cannot detect sarcasm, irony, or nuanced emotional expression. It may misidentify your emotional state. If you find the emotional signal detection unhelpful or inaccurate, you may disable it without affecting other Lumira features.
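To make the mechanism concrete, the keyword matching and the sustained-distress check described above can be sketched as follows. The keyword sets and the threshold (distress signals in at least 3 of the last 5 check-ins) are illustrative assumptions, not Lumira's actual configuration.

```python
# Illustrative keyword-based emotional signal detection.
# Keyword sets and thresholds are example values only.
DISTRESS_KEYWORDS = {"exhausted", "struggling", "overwhelmed", "crying", "alone"}
POSITIVE_KEYWORDS = {"happy", "grateful", "excited", "rested"}

def detect_signal(message: str) -> str:
    """Classify one check-in as 'distress', 'positive', or 'neutral'."""
    words = set(message.lower().split())
    if words & DISTRESS_KEYWORDS:
        return "distress"
    if words & POSITIVE_KEYWORDS:
        return "positive"
    return "neutral"

def sustained_distress(recent_messages: list[str],
                       window: int = 5, threshold: int = 3) -> bool:
    """True when distress appears in >= threshold of the last `window` check-ins."""
    signals = [detect_signal(m) for m in recent_messages[-window:]]
    return signals.count("distress") >= threshold
```

The single-check-in classifier only tailors response tone; wellbeing resources are surfaced only when the multi-check-in pattern check fires, which is why one difficult day does not trigger them.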
6. Data Retention
6.1 Lumira's data retention
Your check-in data, journal entries, concern summaries, and AI-generated responses are stored in Lumira's database (hosted on Supabase). You control your data retention period through Settings:
- Available retention periods: 12 months, 24 months, or 36 months.
- When data exceeds your chosen retention period, it is automatically and permanently deleted.
- You may request immediate deletion of all your data at any time by deleting your account or contacting privacy@hellolumira.app.
- Consent records are retained for seven (7) years as immutable, append-only entries (required for GDPR accountability). Audit logs are anonymised upon account deletion and retained for seven (7) years.
6.2 Anthropic's data handling
When data is sent to Anthropic's Claude API for processing:
- No training on your data. Lumira uses Anthropic's API with data usage controls that prevent your data from being used to train or improve Anthropic's models. This is contractually guaranteed through our data processing agreement with Anthropic.
- Transient processing. Data sent to Anthropic's API is processed to generate a response and is not retained by Anthropic beyond the duration necessary to generate that response, subject to Anthropic's data retention policies for API customers (which provide for short-term logging for abuse prevention and safety monitoring).
- No human review by Anthropic. Under normal operating conditions, your prompts and responses are not reviewed by Anthropic employees. Anthropic reserves the right to review API logs in limited circumstances (e.g., investigating safety incidents or policy violations), as described in Anthropic's API terms of service.
7. Your Rights by Jurisdiction
Depending on your location, you may have specific rights regarding how your data is processed by Lumira's AI system. For full details, see our Privacy Policy. A summary of key jurisdiction-specific rights related to AI processing follows.
7.1 European Union & United Kingdom (GDPR)
- Lawful basis. AI processing of your data is based on your explicit consent, which you grant during onboarding and may withdraw at any time in Settings.
- Right to explanation. You have the right to understand how AI-generated content is produced. This document serves as our transparency disclosure.
- Right to object. You may object to AI processing at any time by opting out in Settings → Privacy & Data.
- Right to lodge a complaint. You have the right to lodge a complaint with your local data protection supervisory authority (e.g., the ICO in the UK, CNIL in France, BfDI in Germany).
- Data Protection Officer. You may contact our Data Protection Officer at dpo@hellolumira.app.
- Data transfers. Data sent to Anthropic's API is processed in the United States. We rely on the European Commission's Standard Contractual Clauses (SCCs), supplemented by the UK Addendum for UK users, as the transfer mechanism.
7.2 California, United States (CCPA/CPRA)
- Right to know. You have the right to know what personal information we collect about you, the purposes for which it is used, and the categories of third parties with whom it is shared.
- Right to delete. You may request deletion of your personal information, subject to certain exceptions under the CCPA.
- Right to opt out of sale. Lumira does not sell your personal information. We do not sell personal data to third parties, and we do not share personal data for cross-context behavioural advertising. You have the right to direct us not to sell your personal information by contacting privacy@hellolumira.app with the subject line “Do Not Sell My Personal Information.”
- Right to non-discrimination. We will not discriminate against you for exercising any of your CCPA rights.
- Sensitive personal information. Health-related data you provide (check-in data, concerns, feeding and sleep information) is classified as sensitive personal information under the CPRA. We use this data solely for the purpose of providing the Service and do not use it for purposes beyond what is reasonably expected by consumers.
7.3 India (DPDPA 2023)
- Data fiduciary obligations. Lumira, as a data fiduciary, processes your personal data only for the purposes you have consented to and in accordance with the Digital Personal Data Protection Act 2023.
- Consent. We obtain your clear, informed, and specific consent before processing your personal data. You may withdraw consent at any time through Settings.
- Right to correction and erasure. You have the right to request correction of inaccurate data and erasure of your personal data.
- Right to grievance redressal. You may raise a grievance about our data processing practices by contacting privacy@hellolumira.app. We will acknowledge your grievance within 48 hours and provide a response within 30 days.
- Children's data. Data about your child is processed with your verifiable parental consent, in accordance with the DPDPA's provisions regarding children's data.
7.4 Australia (Privacy Act 1988)
- Australian Privacy Principles (APPs). Lumira complies with the APPs in its collection, use, and disclosure of personal information of Australian users.
- Cross-border disclosure. Your personal information may be disclosed to Anthropic, Inc. in the United States for AI processing. By using the Service, you consent to this cross-border transfer.
- Access and correction. You have the right to access and request correction of your personal information held by Lumira.
- Complaints. If you believe we have breached the APPs, you may lodge a complaint by emailing privacy@hellolumira.app. If you are not satisfied with our response, you may lodge a complaint with the Office of the Australian Information Commissioner (OAIC).
8. Data Security
We implement the following technical and organisational measures to protect data processed by our AI system:
- Encryption in transit. All data sent between your device, Lumira's servers, and Anthropic's API is encrypted using TLS 1.3.
- Encryption at rest. All data stored in Lumira's database is encrypted at rest using AES-256 encryption.
- Minimal data in prompts. We send only the contextual data necessary for generating a relevant response (see Section 4). We do not send your full account profile or historical data to the AI in every request.
- Row-level security. Supabase row-level security (RLS) policies ensure that users can only access their own data.
- IP address hashing. We never store raw IP addresses. All IP addresses are hashed using SHA-256 before storage.
- Audit logging. All consent changes, data access events, and account actions are logged in append-only audit tables.
- Regular review. AI system prompts, safety filters, and data handling practices are reviewed and updated regularly.
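As an illustration of the IP address hashing measure above: hashing replaces the raw address with a fixed-length digest before anything is written to storage. The sketch below adds a server-side salt, a common hardening step because the IPv4 address space is small enough to enumerate; the salt value shown is a placeholder, not Lumira's actual configuration.

```python
import hashlib

# Illustrative salt; a real deployment keeps this secret server-side.
SALT = b"example-server-side-secret"

def hash_ip(ip_address: str) -> str:
    """Return a salted SHA-256 digest of the IP; the raw IP is never stored."""
    return hashlib.sha256(SALT + ip_address.encode("utf-8")).hexdigest()
```

The digest is deterministic, so it can still be used for rate limiting or abuse detection, but the original address cannot be read back out of storage.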
9. Opting Out of AI Processing
You have the right to use Lumira without AI-powered features. To opt out:
- Navigate to Settings → Privacy & Data → AI Processing.
- Toggle off “Enable AI-powered responses.”
When AI processing is disabled:
- You will still be able to log daily check-ins, track milestones, and use the journal feature.
- Pattern observations (sleep, feeding, mood trends) will continue to work, as they use rule-based logic, not AI.
- You will not receive AI-generated conversational responses, concern summaries, or weekly developmental guides.
- No data will be sent to Anthropic's API.
You may re-enable AI processing at any time in Settings.
10. Changes to This Document
We may update this AI & Data Practices document as our AI systems, data handling practices, or applicable regulations evolve. When we make material changes, we will notify you via email or through a prominent notice within the Service at least fourteen (14) days before the changes take effect. The version number and effective date at the top of this page indicate the current version.
11. Contact
If you have questions about Lumira's AI system, data practices, or this document, please contact us:
Privacy enquiries: privacy@hellolumira.app
Data Protection Officer: dpo@hellolumira.app
AI safety concerns: safety@hellolumira.app
General legal enquiries: legal@hellolumira.app