AI Meeting Notetakers & PDPA: 4 Legal Risks Every Thai SME Must Know Before Your Next Online Meeting
Hello, fellow business owners. In an era where online meetings via Zoom or Microsoft Teams have become the norm, I suspect many of you have encountered an “uninvited guest” — a bot with a curious name like Otter.ai, Fireflies, or Laxis — silently joining your meeting room with a glowing “Recording” status. As AI meeting notetakers become ubiquitous in Thai workplaces, understanding the legal exposure they create under Thailand’s Personal Data Protection Act (PDPA) is no longer optional — it is a baseline obligation for any responsible business operator.
We all recognise the extraordinary utility of these tools. They transcribe speech in real time, summarise key decisions, and deliver a clean action plan the moment a meeting concludes. But as someone who has worked at the intersection of law and technology for over 20 years, I must offer a frank professional warning: “There is no such thing as a free lunch — and convenience may cost you your most sensitive trade secrets.”

In 2025, a landmark class-action lawsuit was filed in the United States against Otter.ai, alleging that the platform intercepted and leveraged user data without adequate consent. This is a red-alert signal that SMEs, hospitals, and tech startups in Thailand cannot afford to ignore. The question is no longer whether to have an AI usage policy — it is whether you will act before an inadvertent PDPA (Personal Data Protection Act B.E. 2562) violation finds you first.
🤖 What Is an AI Meeting Notetaker — And Why Is It a Double-Edged Sword?
How AI Bots Operate Inside Your Online Meeting Room
An AI meeting notetaker is software that combines automatic speech recognition (speech-to-text) with Generative AI to convert spoken audio into text and produce a structured summary of the meeting. Popular tools such as Otter.ai and Fireflies connect directly to your calendar and can “auto-join” any scheduled meeting — often without the host actively triggering them — because the integration was enabled once and subsequently forgotten.
The Convenience That Comes at a Data Privacy Cost
Every coin has two sides. While AI dramatically increases productivity, from a data security standpoint these tools function as intelligent listening devices that capture every word spoken — including personal disclosures, commercially sensitive strategies, and special category personal data (Sensitive Data) — and transmit all of it to Cloud servers, the majority of which are located outside Thailand. For Thai businesses subject to the PDPA, that cross-border data transfer triggers specific compliance obligations that most organisations are not yet meeting.
⚠️ 4 Legal Risks When Using AI Notetakers Without Proper Safeguards
If you allow employees to bring any AI bot into any meeting without governance controls, you are sitting on a ticking legal time-bomb. Here are the four detonators:
1. The PDPA Trap: Recording Without Explicit Consent
In Thailand, recording a conversation when any participant is unaware of — or has not consented to — the recording is a legally sensitive matter with significant consequences.
- The Law: Under PDPA Section 19, collecting personal data (including recorded voice data) requires the data subject’s consent prior to or at the time of collection, unless another lawful basis under the Act applies. Silent, after-the-fact, or implied consent does not satisfy this standard.
- The Risk: If an AI bot joins a meeting and begins recording immediately — without a visible notification or an explicit opt-in mechanism — this constitutes unlawful data collection. Administrative fines under the PDPA run as high as THB 5 million per violation, and they can stack: in the PDPC’s first enforcement action in 2024, a company was fined THB 7 million in aggregate for multiple failures, including a data security breach. Beyond fines, the conduct may also constitute a criminal offence under provisions governing unlawful interception of communications.
2. Data Leakage: When Your Trade Secrets Fund an AI Model’s Training
Have you ever read the Terms of Service of a free-tier AI application? The majority of such agreements state — often in fine print — that your conversation data may be used to “improve and train AI models.”
- The Risk: Imagine your team is in a meeting discussing a “proprietary drug formulation” or a “merger and acquisition (M&A) strategy.” That information is then transmitted to an AI training pipeline. One day, a competitor queries ChatGPT or a similar model and receives output that closely mirrors your confidential strategy. This is not a hypothetical: the Otter.ai class-action complaint filed in California explicitly alleged that non-user participants’ voice data was used for model training without their consent — a disclosure that shocked the enterprise tech community.

3. Court Discoverability: When an AI Summary Becomes Evidence Against You
As a matter of Thai evidence law, anything an AI notetaker records constitutes “electronic evidence” within the meaning of the Electronic Transactions Act B.E. 2544.
- The Problem: AI is not human. It lacks the contextual intelligence to distinguish sarcasm, irony, or office humour from a direct statement of intent.
- The Risk: In the event of litigation — for instance, a wrongful termination claim — the opposing party may invoke a discovery-equivalent mechanism to obtain those AI-generated records. If the AI hallucinated and incorrectly summarised a comment as an admission of liability when you were clearly joking, that fabricated summary could become exhibit evidence against your company before a court can properly assess its reliability.
4. Privilege Waiver: Forfeiting Your Attorney-Client Confidentiality
For hospital executives and legal counsel, this is arguably the most consequential risk of all.
- The Risk: Attorney-client privilege is predicated on secrecy. When an AI bot — a third party — is present in a meeting, recording the conversation and storing it on a foreign Cloud server, a court may well hold that you have voluntarily “waived” that privilege by permitting an outside party to access the protected communication. Once privilege is lost, it cannot be reinstated — and your legal strategy becomes discoverable by the opposing party.
📋 Case Study: The High Cost of AI Misuse in a Healthcare Deal
To illustrate the stakes in concrete terms, consider the following scenario drawn from real-world precedents — including a documented hospital privacy breach in Ontario involving an AI-powered notetaking tool:
The Scenario: “Siam HealthTech Co., Ltd.” (fictitious name), an early-stage startup developing a health application, is in an online meeting with a “prominent private hospital.”
- The Actor: “Nong Ae,” a junior marketing associate, activates an AI Notetaker on the free tier to avoid taking manual notes — without disclosing this to the hospital participants.
- The Incident: The AI bot joins the meeting automatically. During the session, participants discuss names of VIP patients and confidential budget figures.
- The Consequences:
  - PDPA Breach: The hospital commences legal proceedings against Siam HealthTech for collecting personal data of patients without consent, in violation of PDPA Section 19 and Section 27.
  - Data Leak: The hospital’s financial data is processed on foreign servers, in direct breach of the hospital’s internal security policy and potentially in violation of PDPA Section 37, which mandates appropriate security measures for personal data under the controller’s custody.
  - The Damage: The commercial deal collapses. Siam HealthTech faces regulatory fines and sustains severe reputational harm — damage that no startup can absorb in its early years.
[Image: PDPA Violation — AI Meeting Recording Case Study]
🛡️ How to Mitigate Risk: The SME’s Three-Layer Defence Framework
My recommendation is not to abandon the technology. It is to use it intelligently and safely. The following “three-layer armour” is what I advise clients to implement immediately — before their next meeting:
1. Vet Your Tools: Enterprise Grade vs. Free Tier
- Do not use free-tier tools for company business: Invest in an Enterprise Plan — typically the only tier where you can obtain a contractual guarantee that your data will not be used for AI training. Look for explicit Zero Data Retention and No Training on Customer Data provisions in the Data Processing Agreement (DPA).
- Audit data storage locations: Select providers who can specify precisely where data is stored, who hold recognised security certifications such as SOC 2 Type II, and who encrypt data both at rest and in transit.
For a broader overview of how to evaluate AI tools against PDPA requirements, see our guide on AI Governance for Thai SMEs.
2. Establish a Meeting Protocol: Green–Amber–Red
🟢 Green (Recording Permitted): Internal team meetings, general brainstorming sessions with no commercially sensitive content.
🟡 Amber (Proceed with Caution): Meetings with external clients or partners — explicit Consent must be obtained and documented before recording commences.
🔴 Red (AI Strictly Prohibited): Board-level deliberations, legal consultations, sensitive HR matters (performance management, disciplinary proceedings), or any discussion involving trade secrets or special category personal data. (One way to encode this protocol in your tooling is sketched below.)
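If your organisation manages meetings programmatically, this traffic-light classification can live in code rather than in memory. The following is a minimal Python sketch, assuming a keyword-based trigger list — the tier names, keywords, and function are illustrative assumptions, not a standard:

```python
from enum import Enum

class Tier(Enum):
    GREEN = "recording permitted"
    AMBER = "obtain and document external consent first"
    RED = "AI notetakers strictly prohibited"

# Illustrative trigger keywords; each organisation must define its own.
POLICY = {
    Tier.RED:   {"board", "legal", "disciplinary", "trade secret", "patient"},
    Tier.AMBER: {"client", "partner", "vendor", "external"},
}

def classify_meeting(title: str, has_external_guests: bool) -> Tier:
    """Return the most restrictive tier that matches the meeting metadata."""
    words = title.lower()
    if any(keyword in words for keyword in POLICY[Tier.RED]):
        return Tier.RED
    if has_external_guests or any(keyword in words for keyword in POLICY[Tier.AMBER]):
        return Tier.AMBER
    return Tier.GREEN

print(classify_meeting("Q3 internal brainstorm", has_external_guests=False))  # Tier.GREEN
print(classify_meeting("Board strategy review", has_external_guests=False))   # Tier.RED
```

A hard-coded keyword list will never be exhaustive; treat it as a safety net that flags obvious red-tier meetings, not a substitute for human judgment.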
3. Implement a Legally Compliant Consent Flow
Change your organisational culture. AI should never enter a meeting room silently.
- Advance Notice: State clearly in the calendar invitation: “This meeting will use an AI-assisted recording and transcription tool.”
- In-Meeting Affirmation: The meeting host must verbally state at the outset: “We will be recording this session using an AI notetaker. If anyone objects, please indicate so now and the recording will not proceed.” (Affirmative Action — not passive assumption.)
- Respect the Veto: If even a single participant objects, the AI must be disabled immediately. Manual minute-taking is the only lawful alternative in that circumstance. (A sketch of how to evidence this consent flow follows below.)
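PDPA disputes frequently turn on whether consent can be evidenced, so each affirmation is worth logging, not merely announcing. Below is a minimal sketch of an auditable consent record, assuming a simple JSON Lines log file — all field names are illustrative assumptions:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ConsentRecord:
    meeting_id: str
    participant: str
    notified_in_invite: bool  # advance notice given in the calendar invitation
    affirmed_verbally: bool   # acknowledged the host's in-meeting announcement
    objected: bool            # any objection triggers the veto: do not record
    timestamp: str

def log_consent(record: ConsentRecord, path: str = "consent_log.jsonl") -> None:
    """Append a timestamped consent entry to an append-only JSON Lines file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

log_consent(ConsentRecord(
    meeting_id="2026-03-05-client-kickoff",
    participant="somchai@example.co.th",
    notified_in_invite=True,
    affirmed_verbally=True,
    objected=False,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```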
For further guidance on building a PDPA-compliant consent framework, explore our PDPA Compliance Checklist for Thai Businesses.
✅ Conclusion: Building a Culture of Safe and Governed Online Meetings
“Technology is a force multiplier — but wielded without discipline, it becomes a liability multiplier.”
For business owners, saying “No” to AI in certain contexts is not technophobia. It is an exercise of Professionalism and Governance — two qualities that your clients, investors, and regulators increasingly expect as a baseline, not a differentiator.
Executive Action Checklist — Do This This Week:
- Convene your IT and HR teams and ask directly: “Which AI tools are our employees currently using to record meetings — and do we have data processing agreements in place with any of them?”
- Issue a formal company AI Meeting Assistant Policy that classifies meeting types and specifies approved tools, consent procedures, and data retention limits.
- If you will use these tools, invest in an Enterprise licence that contractually protects your data — not a free-tier product that monetises it.
Do not wait for a court summons to arrive at your office before you take this seriously.
Ready to build a legally sound AI framework for your tech business? The Kooru team specialises in AI Governance Audits and technology contract drafting. [Contact us today for an initial consultation] — protect your business before the law catches up with you.

❓ Frequently Asked Questions (FAQ)
1. Does using Otter.ai on the free tier violate Thailand’s PDPA?
— Free-tier usage carries substantial legal risk because such plans typically lack Consent Management features and contractually permit the provider to use your conversation data for model training. If you record the voice of another person without their explicit consent, this likely constitutes a violation of PDPA Section 19 and may expose your organisation to administrative penalties and civil liability.
2. Do I need employee consent for purely internal meetings?
— Yes. Even for internal meetings, the lawful basis for collecting voice data must be established in advance. The most practical approach is to include a provision in your employment contracts or Employee Privacy Policy stating that AI-assisted recording tools may be used for work-related meetings — and to provide a verbal reminder before each recording commences.
3. Can an AI notetaker constitute wiretapping under Thai law?
— If participants are unaware that a recording is being made, the conduct could be characterised as unlawful interception. While Thailand does not have a dedicated Anti-Wiretapping statute equivalent to the U.S. Federal Wiretap Act, the covert collection of personal voice data without consent may violate the PDPA and, depending on the circumstances, provisions of the Criminal Code relating to privacy invasion.
4. How can I verify that my meeting data is not being leaked or misused?
— Conduct a thorough review of the AI provider’s Data Privacy Policy. Specifically, look for confirmation of SOC 2 Type II compliance and an explicit contractual commitment that reads “No training on customer data” — both of which should be present in any Enterprise-grade subscription agreement.
5. What should I do if a client refuses to allow AI recording?
— Comply immediately and without question. Disable the AI tool and revert to manual note-taking. Proceeding with a recording over a participant’s objection destroys trust and exposes your organisation to a formal complaint, regulatory investigation, and potential civil proceedings.
6. How reliable is AI-generated evidence before a Thai court?
— Thai courts do accept electronic evidence, but they assess its probative weight carefully. If an AI transcript lacks a verifiable log file confirming its accuracy, or if the opposing party can demonstrate that the system hallucinated or misrepresented spoken content, the evidentiary weight of that document may be significantly reduced or successfully challenged.
7. How long should AI-recorded meeting data be retained?
— Data should not be retained beyond what is strictly necessary for the stated purpose (the principle of Data Minimisation under PDPA Section 22). A reasonable default policy is to delete recordings and transcripts within 30 to 90 days after the project concludes, materially reducing your exposure in the event of a data breach.
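One practical way to enforce such a limit is a scheduled cleanup job over the folder where transcripts are exported. A minimal sketch — the 90-day window, file pattern, and directory are assumptions to adapt to your own documented policy:

```python
import time
from pathlib import Path

RETENTION_DAYS = 90                            # match your documented policy
TRANSCRIPT_DIR = Path("exports/transcripts")   # hypothetical export location

def purge_expired(directory: Path, retention_days: int) -> int:
    """Delete transcript files older than the retention window; return the count."""
    cutoff = time.time() - retention_days * 86_400
    deleted = 0
    for transcript in directory.glob("*.txt"):
        if transcript.stat().st_mtime < cutoff:
            transcript.unlink()
            deleted += 1
    return deleted

if __name__ == "__main__":
    print(f"Purged {purge_expired(TRANSCRIPT_DIR, RETENTION_DAYS)} expired transcripts")
```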
8. What is the legal difference between Fireflies.ai and Otter.ai in Thailand?
— Both platforms present comparable legal risks when used on their free tier. However, the Enterprise plans for both products include Security and Admin Control features — including data processing agreements, role-based access controls, and data residency options — that make PDPA compliance considerably more achievable. The critical variable is not the brand but the contractual terms of the specific subscription.
9. Is transferring meeting data to foreign servers illegal under Thai law?
— Not per se, but it is heavily regulated. PDPA Section 28 requires that any country receiving Thai personal data must provide an “adequate level” of data protection, or that the transfer be covered by Standard Contractual Clauses (SCCs) or equivalent safeguards. Before deploying any AI Cloud tool, verify the data residency location — Singapore-based servers present a materially lower compliance risk than U.S.-based ones, given Singapore’s PDPA-equivalent framework.
10. How should an organisation start drafting an AI Usage Policy?
— Begin by constructing a formal Allowlist/Blocklist of approved and prohibited AI tools. Then establish a Data Classification framework that assigns sensitivity tiers to different categories of information — and specifies clearly which tiers are absolutely prohibited from AI processing. From there, layer in consent procedures, incident response protocols, and a regular review cycle. Kooru offers structured AI Policy development engagements for organisations at every stage of this journey.
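As a starting point, the allowlist and classification tiers described above can live in a single version-controlled file that IT and legal co-own. A minimal sketch — the tool names and tier labels are illustrative assumptions, not references to specific products:

```python
# ai_usage_policy.py — an AI usage policy expressed as reviewable code
ALLOWLIST = {"notetaker-enterprise"}                   # tools with a signed DPA
BLOCKLIST = {"notetaker-free", "unknown-meeting-bot"}  # never permitted

# Data classification: which sensitivity tiers may ever reach an AI tool.
DATA_TIERS = {
    "public":       {"ai_processing": True},
    "internal":     {"ai_processing": True},
    "confidential": {"ai_processing": False},  # trade secrets, M&A, legal advice
    "sensitive":    {"ai_processing": False},  # PDPA special category data
}

def is_permitted(tool: str, tier: str) -> bool:
    """Allow a tool only if it is allowlisted AND the data tier permits AI."""
    tier_allows_ai = DATA_TIERS.get(tier, {"ai_processing": False})["ai_processing"]
    return tool in ALLOWLIST and tier_allows_ai

assert is_permitted("notetaker-enterprise", "internal")
assert not is_permitted("notetaker-free", "internal")
assert not is_permitted("notetaker-enterprise", "sensitive")
```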
By: Khun Phuwara (ภูวรา ครอบตะคุ) — Senior Advisor in Business Strategy and Legal-Tech, The Kooru
Sources:
https://www.context.news — AI as a Surveillance Tool: Thailand Perspective
https://www.dww.com — An Otter Disaster: Hospital Privacy Breach via AI-Powered Tool
https://otter.ai/privacy-security — Otter.ai Privacy & Security Policy

