
AI disclosures

How mConsent uses AI, how we disclose it to patients, how we handle customer data, and how we stay aligned with FTC guidance and state AI laws.

Last reviewed April 2026 · Applies to Zaha AI and other AI-powered mConsent features · Contact legal@srswebsolutions.com

Summary

mConsent uses AI in a narrow, product-specific way — most visibly in Zaha AI, our voice receptionist. Every AI interaction a patient experiences is disclosed as AI. We do not use customer data to train generalized models. AI features have explicit human-handoff rules for cases the AI should not handle.

1. The short version

Three commitments define how mConsent deploys AI:

  • Disclosure first. Any AI that interacts directly with a patient identifies itself as AI at the start of the interaction. This is a product requirement, not a switch a practice can disable.
  • Narrow scope, documented handoff. Each AI feature has a defined scope of what it is designed to handle. Anything outside that scope routes to a human.
  • Customer data stays out of generalized training. We do not use data from your patient conversations or your practice configuration to train AI models that serve other customers.

The rest of this page documents each commitment concretely, by product.

2. Products that use AI

AI is used in the following mConsent products. This list is maintained as new features ship.

Zaha AI — voice receptionist
  • What AI does: Answers inbound phone calls, recognizes common patient intents (scheduling, FAQs, reschedules, cancellations), books appointments where calendar sync is enabled, sends SMS/email confirmations, and routes out-of-scope calls to human handoff.
  • Where disclosure appears: Spoken at the start of every call: “Hi, this is [name], the AI receptionist for [practice].” Also confirmed verbally if a caller asks whether they are speaking with a person.

Reputation Management — review analysis
  • What AI does: Summarizes themes across review text for practice-facing analytics. Does not generate or post reviews on behalf of patients or practices.
  • Where disclosure appears: Disclosed in the practice-facing dashboard; not patient-facing.

Insurance Concierge — AI-assisted verification
  • What AI does: Assists human verification specialists with eligibility lookups and denial-pattern flags. Does not autonomously determine coverage without human review.
  • Where disclosure appears: Practice-facing feature; patient interactions are handled by human Concierge agents.

Other mConsent products (Paperless Intake, mPayr, Communication) use automation and integration, but do not use generative or conversational AI in their core workflows as of this page’s last review date.

3. How AI disclosure works

For any AI feature that interacts directly with a patient, disclosure happens at the first point of contact and is reinforced on request. The implementation varies by product.

Zaha AI (voice)

Zaha’s default greeting includes the phrase “AI receptionist” within the first sentence of every call. Practices customize the surrounding greeting script (practice name, hours, on-hold alternative), but the AI identification cannot be removed. If a caller directly asks “Am I talking to a real person?”, Zaha confirms it is an AI voice agent. Call audio is recorded and transcribed to support this verification.
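As an illustration only (the function and variable names below are assumptions for this sketch, not mConsent's actual implementation), a greeting template built this way keeps the AI identification fixed while the surrounding script stays customizable:

```python
# Hypothetical sketch: the AI identification is a fixed component of the
# greeting template that practice-level customization cannot remove.
FIXED_AI_ID = "the AI receptionist"  # not editable by practices

def build_greeting(agent_name: str, practice_name: str, custom_suffix: str = "") -> str:
    """Compose the call greeting; practices control only the names and suffix."""
    greeting = f"Hi, this is {agent_name}, {FIXED_AI_ID} for {practice_name}."
    if custom_suffix:
        greeting += f" {custom_suffix}"
    return greeting

print(build_greeting("Zaha", "Smile Dental", "We're open weekdays 8 to 5."))
# → Hi, this is Zaha, the AI receptionist for Smile Dental. We're open weekdays 8 to 5.
```

However the surrounding script is customized, every output of a template like this contains the AI identification in the first sentence.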

Reputation & Insurance Concierge (non-patient-facing)

Because these features do not interact with patients directly, patient-facing disclosure is not required. Practice-facing users see AI-generated outputs labeled as such in the dashboard (for example, review-theme summaries are labeled “AI summary”).

Practice obligation

Practices that use Zaha AI remain responsible for ensuring their overall patient communications program complies with applicable state AI laws. mConsent’s disclosure is the first layer; practice-specific signage, website disclosures, and patient-facing policies are the practice’s responsibility.

4. State & federal law alignment

mConsent’s AI disclosure and data-handling practices are designed to align with the following authorities. This section is informational; it is not legal advice for your practice.

FTC guidance

The Federal Trade Commission’s “Operation AI Comply” program (announced September 2024) and related guidance prohibit deceptive AI marketing, impersonation of humans, and inflated claims about AI capabilities. mConsent’s disclosure-first approach and its “designed to” language on product pages are responses to this guidance.

Colorado AI Act (SB 24-205)

Colorado’s AI Act (effective February 2026) requires developers and deployers of “high-risk AI systems” to provide clear disclosure and documentation. While a dental-practice voice receptionist is not clearly within the Act’s “high-risk” definition (which focuses on consequential decisions in employment, lending, housing, etc.), mConsent applies the Act’s disclosure expectations as a baseline across all AI features that touch consumers.

California SB 942 (AI Transparency Act)

California’s AI Transparency Act (effective January 2026) requires that AI-generated content interacting with California residents be disclosed. Zaha’s voice disclosure covers this requirement for inbound calls; SMS confirmations sent by Zaha are plainly machine-generated text and are delivered in that context.

HIPAA Privacy Rule

All AI-processed patient data is handled under mConsent’s Business Associate Agreement and the safeguards described in our security practices page.

5. What data AI processes

The following lists, product by product, what data the AI actually processes and stores.

Zaha AI

  • Incoming call audio from patients calling a practice’s Zaha-enabled line.
  • Transcripts of those calls, generated automatically.
  • Practice configuration data: hours, providers, services, accepted insurance, handoff rules.
  • Calendar data from connected calendars (Google Calendar or supported PMS) for availability and booking.

Audio and transcripts are stored under the BAA with encryption described in our security page. Transcripts are used for dashboard review, call-quality improvement for that specific practice, and tuning the practice-specific response library.

Reputation & Insurance Concierge

These features process review text (which is already public) and insurance eligibility data. Neither feature processes patient voice or biometric data.

6. Model training & customer data

mConsent’s policy is straightforward:

Policy commitment

Customer data is not used to train generalized AI models that serve other customers. Your patient conversations, practice configuration, and analytics stay within your tenant.

Practice-specific tuning is a different matter and is expected. When your team flags a call transcript for review, that feedback is used to improve how Zaha handles similar calls for your practice — adjusting your response library, handoff thresholds, and greeting behavior. That practice-specific tuning does not propagate to other practices.

mConsent uses foundation models from third-party AI providers (voice synthesis, speech recognition, language understanding). Those providers have their own data-handling commitments to mConsent under Business Associate or equivalent data-processing agreements. Sub-processors are listed per section 7 of our security page.

7. Human oversight & handoff

Every AI feature in mConsent has explicit rules for when a human takes over. The rules are specific to each product.

Zaha AI handoff triggers

  • Caller asks to speak with a person.
  • Caller describes a clinical concern, pain level, or urgency (dental emergencies).
  • Caller asks about billing, treatment plan specifics, or coverage details.
  • Caller’s request falls outside Zaha’s trained intent set for your practice.
  • Zaha’s confidence in understanding the request falls below a configured threshold.

When any of these triggers fires, Zaha routes the call to the practice’s configured handoff destination (live line, voicemail, on-call contact) and logs the reason to the dashboard.
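The trigger rules above amount to a simple decision procedure. The following is a hypothetical sketch (intent names, the threshold value, and all identifiers are illustrative assumptions, not mConsent's actual code):

```python
# Hypothetical sketch of the Zaha handoff decision described above.
from dataclasses import dataclass

TRAINED_INTENTS = {"schedule", "reschedule", "cancel", "faq"}
HANDOFF_INTENTS = {"speak_to_person", "clinical_concern", "billing", "coverage"}
CONFIDENCE_THRESHOLD = 0.7  # assumed to be configurable per practice

@dataclass
class CallTurn:
    intent: str        # recognized patient intent
    confidence: float  # recognizer's confidence in that intent

def should_hand_off(turn: CallTurn) -> tuple[bool, str]:
    """Return (hand_off, reason); the reason is what gets logged."""
    if turn.intent in HANDOFF_INTENTS:
        return True, f"explicit trigger: {turn.intent}"
    if turn.intent not in TRAINED_INTENTS:
        return True, "outside trained intent set"
    if turn.confidence < CONFIDENCE_THRESHOLD:
        return True, "low confidence"
    return False, "handled by AI"

print(should_hand_off(CallTurn("clinical_concern", 0.95)))  # hands off regardless of confidence
print(should_hand_off(CallTurn("schedule", 0.9)))           # stays with the AI
```

Note that in a rule set like this, the default on uncertainty is handoff: a call only stays with the AI when the intent is both in scope and recognized with high confidence.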

Insurance Concierge handoff

AI-assist is a support tool for human verification specialists. Specialists review AI outputs and make the final coverage determination. Patients do not interact with the AI directly.

8. When the AI gets it wrong

AI voice agents can misinterpret speech, handle unusual accents poorly, struggle with background noise, or respond incorrectly to edge-case requests. This is inherent to current voice-AI technology; mConsent does not claim otherwise.

Our approach to errors is threefold:

  • Transcripts and audio are retained. You can review any call that went wrong, flag it for your onboarding specialist, and tune Zaha’s response library to handle that scenario better next time.
  • Low-confidence calls default to handoff. Zaha is configured to route uncertain calls to a human rather than guess.
  • We publish our limitations. Product pages state clearly what Zaha is not designed to do. We do not market against our own reality.

Liability note

Practices remain responsible for the downstream consequences of their use of mConsent AI features, including booking errors, missed communications, or incorrect information relayed to patients. mConsent’s liability limitations are defined in your Master Services Agreement.

9. Practice controls & opt-outs

Practices using mConsent have several controls over how AI operates in their instance:

  • Feature-level toggle. Zaha AI is a separately contracted product. Customers who do not want an AI voice agent simply do not subscribe to it.
  • Scope configuration. Within Zaha, practices configure which call types are handled and which route to humans. Start with after-hours only; expand to lunch and peak-time overflow as you gain confidence.
  • Greeting and handoff customization. The practice-specific greeting, handoff destinations, and handoff triggers are all configurable during onboarding and adjustable afterward.
  • Transcript access and review. Every call transcript is available in the dashboard for your team to review, flag, or export.
  • Data export and deletion. On account termination, practice data — including call audio and transcripts — is handled under the retention policy in section 6 of our security page.
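To make the controls above concrete, a practice's Zaha configuration might look something like the following sketch (all field names and values are illustrative assumptions; the actual configuration is set with your onboarding specialist):

```python
# Hypothetical sketch of a practice-level Zaha configuration.
zaha_config = {
    "enabled_windows": ["after_hours"],  # start narrow, per the guidance above
    "handled_call_types": ["schedule", "reschedule", "cancel", "faq"],
    "handoff": {
        "destination": "voicemail",  # or "live_line", "on_call_contact"
        "triggers": ["speak_to_person", "clinical_concern", "billing"],
        "confidence_threshold": 0.7,
    },
    "greeting": {
        "agent_name": "Zaha",
        "practice_name": "Smile Dental",
        # AI identification is fixed by the product, so it has no field here.
    },
}

# Expanding scope later, as confidence grows:
zaha_config["enabled_windows"] += ["lunch", "peak_overflow"]
```

The point of the sketch is the shape of the controls: scope, handoff, and greeting are practice-configurable, while the AI disclosure itself is not represented in the configuration at all.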

If your practice has a specific AI-use concern not covered here, contact legal@srswebsolutions.com. Enterprise customers can negotiate specific AI-use terms within their MSA.